
    INDEMICS: An Interactive High-Performance Computing Framework for Data Intensive Epidemic Modeling

    We describe the design and prototype implementation of Indemics (Interactive Epidemic Simulation), a modeling environment that uses high-performance computing technologies to support complex epidemic simulations. Indemics supports policy analysts and epidemiologists interested in the planning and control of pandemics. It goes beyond traditional epidemic simulations by providing a simple and powerful way to represent and analyze policy-based as well as individual-based adaptive interventions. Users can also stop the simulation at any point, assess the state of the simulated system, and add further interventions. Indemics is available to end users via a web-based interface. Detailed performance analysis shows that Indemics greatly enhances the capability and productivity of simulating complex intervention strategies with only a marginal decrease in performance. We also demonstrate how Indemics was applied in real case studies involving complex interventions.
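    The interactive workflow the abstract describes (run, pause, inspect the simulated state, add an intervention) can be pictured with a small toy loop. This is a minimal sketch only; the Simulation class, the vaccinate helper, and every parameter below are hypothetical stand-ins, not the actual Indemics API.

```python
# Toy SIR-style simulation with a mid-run, state-dependent intervention.
# All names and parameters are illustrative; this is not the Indemics interface.
import random

class Simulation:
    """Minimal agent-based susceptible/infected/recovered model."""
    def __init__(self, population, seed_infected=5, beta=0.3, gamma=0.1):
        self.state = {i: "S" for i in range(population)}
        for i in random.sample(range(population), seed_infected):
            self.state[i] = "I"
        self.beta, self.gamma = beta, gamma
        self.day = 0

    def step(self):
        infected = [i for i, s in self.state.items() if s == "I"]
        for i in infected:
            contact = random.choice(list(self.state))      # one random contact per day
            if self.state[contact] == "S" and random.random() < self.beta:
                self.state[contact] = "I"
            if random.random() < self.gamma:                # chance of recovery
                self.state[i] = "R"
        self.day += 1

    def counts(self):
        c = {"S": 0, "I": 0, "R": 0}
        for s in self.state.values():
            c[s] += 1
        return c

def vaccinate(sim, fraction):
    """Adaptive intervention: immunize a fraction of the currently susceptible."""
    susceptible = [i for i, s in sim.state.items() if s == "S"]
    for i in random.sample(susceptible, int(len(susceptible) * fraction)):
        sim.state[i] = "R"

sim = Simulation(population=1000)
for _ in range(60):
    sim.step()
    # "Stop" the simulation, assess its state, and add an intervention mid-run.
    if sim.counts()["I"] > 100:
        vaccinate(sim, fraction=0.2)
print(sim.day, sim.counts())
```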

    Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies

    In motion analysis and understanding, it is important to be able to fit a suitable model or structure to the temporal series of observed data, in order to describe motion patterns in a compact way and to discriminate between them. In an unsupervised context, i.e., when no prior model of the moving object(s) is available, such a structure has to be learned from the data in a bottom-up fashion. Recently, volumetric approaches, in which the motion is captured from a number of cameras and a voxel-set representation of the body is built from the camera views, have gained ground due to attractive features such as inherent view-invariance and robustness to occlusions. Automatic, unsupervised segmentation of moving bodies along entire sequences, in a temporally coherent and robust way, has the potential to provide a means of constructing a bottom-up model of the moving body and to track motion cues that may later be exploited for motion classification. Spectral methods such as locally linear embedding (LLE) can be useful in this context, as they preserve "protrusions", i.e., high-curvature regions of the 3D volume, of articulated shapes, while improving their separation in a lower-dimensional space, making them easier to cluster. In this paper we therefore propose a spectral approach to unsupervised and temporally coherent body-protrusion segmentation along time sequences. Volumetric shapes are clustered in an embedding space, clusters are propagated in time to ensure coherence, and merged or split to accommodate changes in the body's topology. Experiments on both synthetic and real sequences of dense voxel-set data support the ability of the proposed method to cluster body parts consistently over time in a totally unsupervised fashion, as well as its robustness to sampling density and shape quality and its potential for bottom-up model construction. (Comment: 31 pages, 26 figures)
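    As a rough illustration of the spectral idea (embed the voxel set so that protrusions stay separated, then cluster), the following sketch applies scikit-learn's LocallyLinearEmbedding and KMeans to a synthetic three-part point cloud. It is a minimal stand-in under made-up data and parameters, not the paper's temporally coherent pipeline.

```python
# Hedged sketch: LLE embedding of a voxel point cloud followed by clustering.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy articulated "body": a torso plus two protruding limbs as 3D point sets.
torso = rng.normal([0, 0, 0], 0.3, size=(300, 3))
limb1 = rng.normal(0, 0.1, size=(150, 3)) + np.linspace([0, 0, 0], [2, 0, 0], 150)
limb2 = rng.normal(0, 0.1, size=(150, 3)) + np.linspace([0, 0, 0], [0, 2, 0], 150)
voxels = np.vstack([torso, limb1, limb2])

# Embed the voxel set; LLE tends to keep high-curvature protrusions well separated.
low_dim = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(voxels)

# Cluster in the embedding space to obtain candidate body parts.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(low_dim)
print({int(part): int((labels == part).sum()) for part in np.unique(labels)})
```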

    CONFIGURATION MANAGEMENT IN MANUFACTURING AND ASSEMBLY: CASE STUDY AND ENABLER DEVELOPMENT

    The overall goal of this research is to improve the product configuration change management process. The increase in the demand for highly customizable products has led many manufacturers to use mass customization to meet the constantly changing demands of a wide consumer base. However, effectively managing the configurations can be difficult, especially for large manufacturers or for complex products with a large number of possible configurations. This is largely due to a combination of the scope of the configuration management system and the difficulty of understanding how changes to one element of a configuration can propagate through the configuration system. To increase the engineer's ability to understand the configuration management system and how changes can affect it, an improved method is required. Based on the results of a case study at a major automotive OEM, a configuration change management method is developed to address the aforementioned gap. In addition, a set of design enablers is deployed as part of the method. The major contribution of this work is the improved method for configuration change management and the use of graph visualization in exploring configuration changes. The use of graph visualizations for configuration management is validated through a user study, four implementation studies using ongoing configuration changes at the OEM, and user feedback and evaluation. The method is validated through application in three historical cases and user feedback. The results show that the method increases the capabilities of the engineer in exploring a proposed configuration change and identifying potential errors.
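    A minimal way to picture the graph-based exploration of configuration changes is a dependency graph over configuration elements plus a reachability query for everything a proposed change can touch. The sketch below uses networkx; the element names and dependency edges are hypothetical and are not the OEM's configuration data or the thesis' actual enabler.

```python
# Illustrative sketch: model configuration elements as a directed dependency
# graph and trace how a proposed change propagates through it.
import networkx as nx

# Hypothetical configuration elements and the elements they constrain.
dependencies = [
    ("engine_v6", "transmission_8spd"),
    ("engine_v6", "cooling_pkg_B"),
    ("transmission_8spd", "driveshaft_hd"),
    ("cooling_pkg_B", "front_fascia_vented"),
    ("trim_sport", "front_fascia_vented"),
]

config_graph = nx.DiGraph(dependencies)

def impacted_elements(graph, changed_element):
    """Return every configuration element reachable from the changed one."""
    return nx.descendants(graph, changed_element)

# An engineer proposes changing the V6 engine option; list what the change can reach.
print(sorted(impacted_elements(config_graph, "engine_v6")))
# -> ['cooling_pkg_B', 'driveshaft_hd', 'front_fascia_vented', 'transmission_8spd']
```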

    Enhancing community detection using a network weighting strategy

    A community within a network is a group of vertices densely connected to each other but less connected to the vertices outside. The problem of detecting communities in large networks plays a key role in a wide range of research areas, e.g. computer science, biology and sociology. Most existing algorithms for finding communities rely on the topological features of the network and often do not scale well to large, real-life instances. In this article we propose a strategy to enhance existing community detection algorithms by adding a pre-processing step in which edges are weighted according to their centrality w.r.t. the network topology. In our approach, the centrality of an edge reflects its contribution to making arbitrary graph traversals, i.e., spreading messages over the network, as short as possible. Our strategy effectively complements information about network topology and can be used as an additional tool to enhance community detection. The computation of edge centralities is carried out by performing multiple random walks of bounded length on the network, which makes the computation feasible even on large-scale networks. The strategy has been tested in conjunction with three state-of-the-art community detection algorithms, namely the Louvain method, COPRA and OSLOM. Experimental results show that our method improves the accuracy of existing algorithms on both synthetic and real-life datasets. (Comment: 28 pages, 2 figures)
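    The pre-processing step can be approximated with a short sketch: run many bounded random walks, count how often each edge is traversed, use the counts as edge weights, and hand the weighted graph to an off-the-shelf community detector. This is a simplified proxy for the paper's edge-centrality measure (which scores an edge's contribution to keeping traversals short) and assumes a networkx version that provides louvain_communities.

```python
# Hedged sketch of random-walk-based edge weighting followed by community detection.
import random
import networkx as nx

def random_walk_edge_weights(graph, walks_per_node=20, walk_length=5, seed=0):
    """Count traversals of each edge over many bounded random walks."""
    rng = random.Random(seed)
    counts = {tuple(sorted(e)): 0 for e in graph.edges()}
    for start in graph.nodes():
        for _ in range(walks_per_node):
            node = start
            for _ in range(walk_length):
                neighbors = list(graph.neighbors(node))
                if not neighbors:
                    break
                nxt = rng.choice(neighbors)
                counts[tuple(sorted((node, nxt)))] += 1
                node = nxt
    return counts

graph = nx.karate_club_graph()
for (u, v), c in random_walk_edge_weights(graph).items():
    graph[u][v]["weight"] = 1 + c  # keep every edge with a positive weight

# Feed the weighted graph to a standard detector (the paper also pairs the
# weighting with COPRA and OSLOM).
communities = nx.community.louvain_communities(graph, weight="weight", seed=0)
print(len(communities), [len(c) for c in communities])
```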

    Enhancing Biomedical Lay Summarisation with External Knowledge Graphs

    Previous approaches to automatic lay summarisation rely exclusively on the source article, which, given that it is written for a technical audience (e.g., researchers), is unlikely to explicitly define all technical concepts or state all of the background information that is relevant for a lay audience. We address this issue by augmenting eLife, an existing biomedical lay summarisation dataset, with article-specific knowledge graphs, each containing detailed information on relevant biomedical concepts. Using both automatic and human evaluations, we systematically investigate the effectiveness of three different approaches for incorporating knowledge graphs within lay summarisation models, with each method targeting a distinct area of the encoder-decoder model architecture. Our results confirm that integrating graph-based domain knowledge can significantly benefit lay summarisation by substantially increasing the readability of generated text and improving the explanation of technical concepts. (Comment: Accepted to the EMNLP 2023 main conference)
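    One generic way to inject graph knowledge on the input side is to linearise the article-specific triples and prepend them to the encoder input of a summarisation model. The sketch below does this with a stock BART checkpoint from the transformers library; the triples, the checkpoint choice, and the concatenation scheme are illustrative assumptions and not the paper's three specific methods.

```python
# Hedged sketch of input-level knowledge-graph augmentation for a summariser.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "facebook/bart-large-cnn"  # stand-in summariser, not the paper's model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical article-specific knowledge-graph triples for a biomedical concept.
triples = [
    ("mitochondria", "is_a", "organelle"),
    ("mitochondria", "produces", "ATP"),
    ("ATP", "is", "the cell's main energy currency"),
]
kg_text = " ".join(f"{s} {r.replace('_', ' ')} {o}." for s, r, o in triples)

article = "(source article text for the lay summary would go here)"

# Prepend the linearised triples to the article before encoding.
inputs = tokenizer(kg_text + " " + article, return_tensors="pt",
                   truncation=True, max_length=1024)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```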

    Optimizing compilation with preservation of structural code coverage metrics to support software testing

    Code-coverage-based testing is a widely used testing strategy that aims to provide a meaningful decision criterion for the adequacy of a test suite. Code-coverage-based testing is also mandated for the development of safety-critical applications; for example, the DO-178B document requires the application of modified condition/decision coverage. One critical issue of code-coverage testing is that structural code-coverage criteria are typically applied to source code, whereas the generated machine code may have a different structure because of code optimizations performed by a compiler. In this work, we present the automatic calculation of coverage profiles describing which structural code-coverage criteria are preserved by which code optimization, independently of the concrete test suite. These coverage profiles make it easy to extend compilers with the ability to preserve any given code-coverage criterion by enabling only those code optimizations that preserve it. Furthermore, we describe the integration of these coverage profiles into the GCC compiler. With these coverage profiles, we answer the question of how much code optimization is possible without compromising the error-detection likelihood of a given test suite. Experimental results show that the performance cost of preserving structural code coverage in GCC is rather low.
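    The coverage-profile idea can be sketched as a lookup table from optimization to the coverage criteria it preserves, plus a selector that enables only the optimizations that are safe for a requested criterion. In the sketch below the GCC flag names are real options, but the preservation entries are invented placeholders; the actual profiles are what the paper computes.

```python
# Illustrative sketch of coverage profiles and criterion-aware flag selection.
# The preservation sets here are hypothetical, not the paper's results.
COVERAGE_PROFILES = {
    # optimization flag    -> criteria it (hypothetically) preserves
    "-funroll-loops":      {"statement"},
    "-finline-functions":  {"statement", "decision"},
    "-fif-conversion":     set(),   # e.g. may merge branches, breaking decision coverage
}

def safe_optimizations(required_criterion):
    """Enable only optimizations whose profile preserves the required criterion."""
    flags = []
    for flag, preserved in COVERAGE_PROFILES.items():
        if required_criterion in preserved:
            flags.append(flag)
        else:
            flags.append(flag.replace("-f", "-fno-", 1))  # explicitly disable it
    return flags

# Build a compile command that keeps decision coverage measured on source code valid.
cmd = ["gcc", "-O2", *safe_optimizations("decision"), "-o", "app", "app.c"]
print(" ".join(cmd))
```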

    DEPLOYING, IMPROVING AND EVALUATING EDGE BUNDLING METHODS FOR VISUALIZING LARGE GRAPHS

    A tremendous increase in the scale of graphs has been witnessed in a wide range of fields, which demands efficient and effective visualization techniques to assist users in better understanding large graphs. Conventional node-link diagrams are often used to visualize graphs, but excessive edge crossings can easily incur severe visual clutter in the node-link diagram of a large graph. Edge bundling can effectively remedy visual clutter and reveal high-level graph structures. Although significant efforts have been devoted to developing edge bundling, three challenging problems remain. First, edge bundling techniques are often computationally expensive and are not easy to deploy for web-based applications. State-of-the-art edge bundling methods often require special system support and techniques such as high-end GPU acceleration for large graphs, which makes these methods less portable, especially for ubiquitous mobile devices. Second, the quantitative quality of edge bundling results is barely assessed in the literature; current comparisons of edge bundling focus mainly on computational performance and perceptual results. Third, although the family of edge bundling techniques offers a rich set of bundling layouts, there is no generic method for generating different styles of edge bundling. In this research, I aim to address these problems and have made the following contributions. First, I provide an efficient framework to deploy edge bundling on web-based platforms by exploiting standard graphics hardware functions and libraries. My framework can generate high-quality edge bundling results on web-based platforms and achieves a speedup of 50X over the previous state-of-the-art edge bundling method on a graph with half a million edges. Second, I propose a new moving-least-squares-based approach that lowers the algorithmic complexity of edge bundling and, according to a quality metric, generates better bundling results than other methods. Third, I provide an information-theoretic metric to evaluate edge bundling methods, with which domain users can choose appropriate edge bundling methods and parameters for their applications. Last but not least, I present a deep learning framework for edge bundling visualizations: through a training process that learns the results of a specific edge bundling method, the framework can infer the final layout of that method, and it is generic enough to reproduce the results of different edge bundling methods. Adviser: Hongfeng Yu
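    As a toy illustration of kernel-weighted bundling (loosely in the spirit of a moving-least-squares style smoothing, but not the dissertation's algorithm), the sketch below samples each edge into control points and repeatedly pulls interior points toward a locally weighted average of all control points, so nearby edges drift together. All data and parameters are made up.

```python
# Toy bundling step: Gaussian-weighted attraction between edge control points.
import numpy as np

def sample_edges(edges, n_points=20):
    """Turn straight edges (p0, p1) into polylines of n_points control points."""
    t = np.linspace(0.0, 1.0, n_points)[None, :, None]        # (1, n, 1)
    p0 = np.array([e[0] for e in edges], float)[:, None, :]   # (m, 1, 2)
    p1 = np.array([e[1] for e in edges], float)[:, None, :]
    return p0 * (1 - t) + p1 * t                               # (m, n, 2)

def bundle_step(polylines, radius=0.5, strength=0.4):
    """Move interior control points toward a Gaussian-weighted local mean."""
    pts = polylines.reshape(-1, 2)
    out = polylines.copy()
    for e in range(polylines.shape[0]):
        for i in range(1, polylines.shape[1] - 1):             # keep endpoints fixed
            d2 = np.sum((pts - polylines[e, i]) ** 2, axis=1)
            w = np.exp(-d2 / (2 * radius ** 2))
            target = (w[:, None] * pts).sum(axis=0) / w.sum()
            out[e, i] = (1 - strength) * polylines[e, i] + strength * target
    return out

# Three roughly parallel edges are pulled toward each other over a few iterations.
edges = [((0, 0), (10, 0.5)), ((0, 1), (10, 1.5)), ((0, 0.5), (10, 1.0))]
poly = sample_edges(edges)
for _ in range(5):
    poly = bundle_step(poly)
print(np.round(poly[0, 10], 2), np.round(poly[1, 10], 2))  # midpoints drawn closer
```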

    Automating Security Risk and Requirements Management for Cyber-Physical Systems

Cyber-Physical Systems enable various modern use cases and business models such as connected vehicles, the Smart (power) Grid, or the Industrial Internet of Things. Their key characteristics (complexity, heterogeneity, and longevity) make the long-term protection of these systems a demanding but indispensable task. In the physical world, the laws of physics provide a constant scope for risks and their treatment.
In cyberspace, on the other hand, there is no such constant to counteract the erosion of security features. As a result, existing security risks can constantly change and new ones can arise. To prevent damage caused by malicious acts, it is necessary to identify high and unknown risks early and to counter them appropriately. Considering the numerous dynamic security-relevant factors requires a new level of automation in the management of security risks and requirements, one that goes beyond the current state of the art. Only in this way can an appropriate, comprehensive, and consistent level of security be achieved in the long term. This work addresses the pressing need for an automation methodology for security risk assessment as well as for the generation and management of security requirements for Cyber-Physical Systems. The presented framework accordingly comprises three components: (1) a model-based security risk assessment methodology, (2) methods to unify, deduce, and manage security requirements, and (3) a set of tools and procedures to detect and respond to security-relevant situations. The need for protection and the appropriate rigor are determined and evaluated by the security risk assessment using graphs and security-specific modeling. Based on the model and the assessed risks, well-founded security requirements for protecting the overall system and its functionality are systematically derived and formulated in a uniform, machine-readable structure. This machine-readable structure makes it possible to propagate security requirements automatically along the supply chain. It also enables the efficient reconciliation of present capabilities with external security requirements from regulations, processes, and business partners. Despite all measures taken, a residual risk of compromise always remains and must be met with an appropriate response. This residual risk is addressed by tools and processes that improve both the local and the large-scale detection, classification, and correlation of incidents. Integrating the findings from such incidents into the model often leads to updated assessments and new requirements, and improves further analyses. Finally, the presented framework is demonstrated on a recent application example from the automotive domain.
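    The machine-readable requirement structure and the reconciliation of requirements against claimed capabilities can be pictured with a small sketch. The field names, rigor scale, and capability map below are hypothetical illustrations and not the thesis' actual schema.

```python
# Hedged sketch of a machine-readable security requirement and a capability check.
import json
from dataclasses import dataclass, asdict

@dataclass
class SecurityRequirement:
    req_id: str
    asset: str          # system element the requirement protects
    sec_property: str   # e.g. "integrity", "confidentiality", "availability"
    rigor: int          # required stringency level derived from the risk assessment
    source: str         # origin of the requirement (risk model, regulation, partner)

requirements = [
    SecurityRequirement("REQ-001", "vehicle_gateway", "integrity", rigor=3, source="risk_model"),
    SecurityRequirement("REQ-002", "telematics_unit", "confidentiality", rigor=2, source="regulation"),
]

# Capabilities a supplier claims for its components, keyed by (asset, property).
capabilities = {("vehicle_gateway", "integrity"): 3, ("telematics_unit", "confidentiality"): 1}

def unmet(requirements, capabilities):
    """Return requirements whose claimed capability falls short of the required rigor."""
    return [r for r in requirements
            if capabilities.get((r.asset, r.sec_property), 0) < r.rigor]

# Requirements that cannot be reconciled with the claimed capabilities.
print(json.dumps([asdict(r) for r in unmet(requirements, capabilities)], indent=2))
```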