
    Active Virtual Network Management Prediction: Complexity as a Framework for Prediction, Optimization, and Assurance

    Research into active networking has provided the incentive to revisit what have traditionally been treated as distinct properties and characteristics of information transfer, such as protocol versus service; at a more fundamental level, this paper considers the blending of computation and communication by means of complexity. The specific service examined in this paper is network self-prediction enabled by Active Virtual Network Management Prediction. Computation/communication is analyzed via Kolmogorov Complexity. The result is a mechanism to understand and improve the performance of active networking in general and Active Virtual Network Management Prediction in particular. The Active Virtual Network Management Prediction mechanism allows information, in various states of algorithmic and static form, to be transported in the service of prediction for network management. The results are generally applicable to algorithmic transmission of information. Kolmogorov Complexity is used, and experimentally validated, as a theory describing the relationship among algorithmic compression, complexity, and prediction accuracy within an active network. Finally, the paper concludes with a complexity-based framework for Information Assurance that attempts to take a holistic view of vulnerability analysis.
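
    Since Kolmogorov complexity itself is uncomputable, compression-based proxies are the standard way to estimate it in practice. The following minimal Python sketch (zlib as the compressor and the two sample byte strings are illustrative choices, not the paper's setup) shows the intuition that more compressible, i.e. lower-complexity, data should be easier to predict:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed size over original size: a computable upper-bound
    proxy for (uncomputable) Kolmogorov complexity."""
    return len(zlib.compress(data, 9)) / len(data)

# Highly regular stand-in for a traffic trace: low complexity, predictable.
structured = b"0101010101010101" * 64
# Incompressible stand-in: high complexity, essentially unpredictable.
random_bytes = os.urandom(1024)

print(compression_ratio(structured))    # small ratio -> low estimated complexity
print(compression_ratio(random_bytes))  # ratio near 1 -> high estimated complexity
```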

    A generic physical vulnerability model for floods: review and concept for data-scarce regions

    The use of different methods for physical flood vulnerability assessment has evolved over time, from traditional single-parameter stage–damage curves to multi-parameter approaches such as multivariate or indicator-based models. However, despite the extensive implementation of these models in flood risk assessment globally, a considerable gap remains in their applicability to data-scarce regions. Considering that these regions are mostly areas with a limited capacity to cope with disasters, there is an essential need to assess the physical vulnerability of the built environment and thereby contribute to improved flood risk reduction. To close this gap, we propose linking approaches with reduced data requirements, such as vulnerability indicators (integrating major damage drivers) and damage grades (integrating frequently observed damage patterns). First, we present a review of current studies of physical vulnerability indicators and of flood damage models, comprising stage–damage curves and the multivariate methods that have been applied to predict damage grades. Second, we propose a new conceptual framework for assessing the physical vulnerability of buildings exposed to flood hazards, specifically tailored for use in data-scarce regions. This framework is operationalized in three steps: (i) developing a vulnerability index, (ii) identifying regional damage grades, and (iii) linking the resulting index classes with damage patterns, utilizing a synthetic "what-if" analysis. The new framework is a first step toward enhancing flood damage prediction to support risk reduction in data-scarce regions. It addresses selected gaps in the literature by extending the application of the vulnerability index to damage grade prediction through the use of a synthetic multi-parameter approach. The framework can be adapted to different data-scarce regions and allows for integrating possible modifications to damage drivers and damage grades.
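
    As a rough illustration of the three steps, consider the Python sketch below. The indicator names, weights, class thresholds, and grade mapping are all hypothetical placeholders, since the framework leaves these to be derived per region:

```python
# Step (i): indicator-based vulnerability index as a weighted sum of
# normalized damage drivers (scores assumed pre-scaled to [0, 1]).
# All indicator names and weights here are invented for illustration.
WEIGHTS = {"wall_material": 0.35, "storeys": 0.25, "flow_depth_exposure": 0.40}

def vulnerability_index(scores: dict) -> float:
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Steps (ii)/(iii): bin the index into classes and link each class to a
# regional damage grade (D1 light ... D5 destruction) via "what-if" analysis.
CLASS_TO_GRADE = [(0.25, "D1"), (0.50, "D2"), (0.75, "D3"), (1.01, "D4")]

def damage_grade(index: float) -> str:
    for upper, grade in CLASS_TO_GRADE:
        if index < upper:
            return grade
    return "D5"

building = {"wall_material": 0.8, "storeys": 0.3, "flow_depth_exposure": 0.6}
idx = vulnerability_index(building)
print(f"index={idx:.2f} -> expected grade {damage_grade(idx)}")
```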

    Seismic vulnerability of building aggregates through hybrid and indirect assessment techniques

    This work approaches the seismic vulnerability assessment of an old stone masonry building aggregate, located in San Pio delle Camere (Abruzzo, Italy), slightly affected by the earthquake of 6 April 2009 that struck L’Aquila and its districts. The building aggregate was modelled using the 3muri software for seismic analysis of masonry constructions. On the one hand, static non-linear numerical analyses were performed to obtain capacity curves together with predicted damage distributions for the input seismic action (hybrid technique). On the other hand, indirect techniques, based on different vulnerability index formulations, were used to assess the building aggregate’s behaviour under earthquake action. The activities carried out provide a clear picture of the seismic vulnerability of building aggregates and will aid future retrofitting interventions.
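
    For context, indirect techniques of this kind typically compute a vulnerability index as a weighted sum of survey-parameter scores, in the spirit of the GNDT/Benedetti–Petrini formulation. The sketch below is illustrative only; the parameter names, class scores, and weights are hypothetical and not taken from the paper:

```python
# Each surveyed parameter gets a class score C (higher = more vulnerable)
# and a weight W reflecting its influence; all values here are hypothetical.
PARAMETERS = [
    # (name, class score C, weight W, worst-class score Cmax)
    ("organization of resisting system", 20, 1.00, 45),
    ("quality of resisting system",      20, 0.25, 45),
    ("horizontal diaphragms",            15, 1.00, 45),
    ("maximum distance between walls",   25, 0.25, 45),
]

def vulnerability_index(parameters) -> float:
    """Normalized index Iv in [0, 100]: 100 * sum(C*W) / sum(Cmax*W)."""
    raw = sum(c * w for _, c, w, _ in parameters)
    worst = sum(cmax * w for _, _, w, cmax in parameters)
    return 100.0 * raw / worst

print(f"Iv = {vulnerability_index(PARAMETERS):.1f}")  # ~41 for these values
```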

    Illuminati: Towards Explaining Graph Neural Networks for Cybersecurity Analysis

    Graph neural networks (GNNs) have been used to build multi-layer graph models for a number of cybersecurity applications, from fraud detection to software vulnerability analysis. Unfortunately, like traditional neural networks, GNNs suffer from a lack of transparency: it is challenging to interpret the model's predictions. Prior works focused on explaining specific factors of a GNN model. In this work, we design and implement Illuminati, a comprehensive and accurate explanation framework for cybersecurity applications using GNN models. Given a graph and a pre-trained GNN model, Illuminati identifies the important nodes, edges, and attributes contributing to the prediction while requiring no prior knowledge of the GNN model. We evaluate Illuminati on two cybersecurity applications, i.e., code vulnerability detection and smart contract vulnerability detection. The experiments show that Illuminati achieves more accurate explanation results than state-of-the-art methods: 87.6% of the subgraphs identified by Illuminati retain their original prediction, an improvement of 10.3 percentage points over the best prior method at 77.3%. Furthermore, Illuminati's explanations can be easily understood by domain experts, suggesting their significant usefulness for the development of cybersecurity applications.
    Comment: EuroS&P 202
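
    The "retain their original prediction" metric can be made concrete with a short sketch. Below, `toy_model` and `toy_explainer` are invented stand-ins, not Illuminati's API: an explanation subgraph counts as faithful when the model assigns it the same label as the full graph:

```python
import networkx as nx

def prediction_retained(model, graph: nx.Graph, important_nodes) -> bool:
    """An explanation is faithful if the induced subgraph keeps the label."""
    sub = graph.subgraph(important_nodes)
    return model(sub) == model(graph)

def retention_rate(model, graphs, explainer) -> float:
    kept = sum(prediction_retained(model, g, explainer(g)) for g in graphs)
    return kept / len(graphs)  # Illuminati reports 87.6% on this kind of metric

# Toy stand-ins so the sketch runs end to end:
toy_model = lambda g: int(g.number_of_edges() > 2)            # "classifier"
toy_explainer = lambda g: list(g.nodes)[: max(3, len(g) // 2)]  # "explanation"
g = nx.path_graph(6)
print(retention_rate(toy_model, [g], toy_explainer))  # 0.0: this toy explanation fails
```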

    M-STAR: A Modular, Evidence-based Software Trustworthiness Framework

    Despite years of intensive research in the field of software vulnerability discovery, exploits are becoming ever more common. Consequently, it is more necessary than ever to choose software configurations that minimize systems' exposure surface to these threats. In order to support users in assessing the security risks induced by their software configurations and in making informed decisions, we introduce M-STAR, a Modular Software Trustworthiness ARchitecture and framework for probabilistically assessing the trustworthiness of software systems based on evidence, such as their vulnerability history and source code properties. Integral to M-STAR is a software trustworthiness model consistent with the concept of computational trust. Computational trust models are rooted in Bayesian probability and Dempster-Shafer belief theory, offering mathematical soundness and expressiveness to our framework. To evaluate our framework, we instantiate M-STAR for Debian Linux packages and investigate real-world deployment scenarios. In our experiments with real-world data, M-STAR could assess the relative trustworthiness of complete software configurations with an error of less than 10%. Due to its modular design, our proposed framework is agile, as it can incorporate future advances in the field of code analysis and vulnerability prediction. Our results indicate that M-STAR can be a valuable tool for system administrators, regular users and developers, helping them assess and manage risks associated with their software configurations.
    Comment: 18 pages, 13 figure
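
    To see how a computational-trust model turns evidence into a score, here is a minimal subjective-logic sketch using Josang's beta mapping, which connects Bayesian probability with Dempster-Shafer-style belief/disbelief/uncertainty. The evidence counts are invented, and M-STAR's actual parameterization may differ:

```python
def opinion(r: float, s: float) -> tuple:
    """Subjective-logic opinion (belief, disbelief, uncertainty) from
    r positive and s negative evidence units (beta-distribution mapping)."""
    k = r + s + 2.0
    return r / k, s / k, 2.0 / k

def expected_trust(r: float, s: float, base_rate: float = 0.5) -> float:
    """Probability expectation of the opinion: belief plus a base-rate
    weighted share of the remaining uncertainty."""
    b, d, u = opinion(r, s)
    return b + base_rate * u

# Hypothetical evidence: months without a CVE vs. months with one disclosed.
print(expected_trust(r=22, s=2))  # ~0.88: package with a quiet history
print(expected_trust(r=3, s=9))   # ~0.29: package with a poor vulnerability record
```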

    Reactive point processes: A new approach to predicting power failures in underground electrical systems

    Reactive point processes (RPPs) are a new statistical model designed for predicting discrete events in time based on past history. RPPs were developed to handle an important problem within the domain of electrical grid reliability: short-term prediction of electrical grid failures ("manhole events"), including outages, fires, explosions and smoking manholes, which can threaten public safety and the reliability of electrical service in cities. RPPs incorporate self-exciting, self-regulating and saturating components. The self-excitation occurs as a result of a past event, which causes a temporary rise in vulnerability to future events. The self-regulation occurs as a result of an external inspection, which temporarily lowers vulnerability to future events. RPPs can saturate when too many events or inspections occur close together, which ensures that the probability of an event stays within a realistic range. Two of the operational challenges for power companies are (i) making continuous-time failure predictions and (ii) cost/benefit analysis for decision making and proactive maintenance. RPPs are naturally suited to both of these challenges. We use the model to predict power-grid failures in Manhattan over a short-term horizon and to provide a cost/benefit analysis of different proactive maintenance programs.
    Comment: Published at http://dx.doi.org/10.1214/14-AOAS789 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
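
    The three components combine naturally into a conditional intensity function. The sketch below uses exponential kernels and tanh saturation as illustrative choices; the paper's exact kernels, saturation functions, and parameter values may differ:

```python
import math

def rpp_intensity(t, events, inspections,
                  lam0=0.1, a1=1.0, a2=0.8, beta1=0.5, beta2=0.5):
    """Illustrative reactive point process intensity:
    baseline * (1 + saturated self-excitation - saturated self-regulation)."""
    # Self-excitation: past events raise vulnerability, decaying over time.
    excite = sum(math.exp(-beta1 * (t - te)) for te in events if te < t)
    # Self-regulation: past inspections lower vulnerability, also decaying.
    regulate = sum(math.exp(-beta2 * (t - ti)) for ti in inspections if ti < t)
    # tanh saturation bounds both effects, keeping the intensity realistic
    # even when many events or inspections cluster together.
    lam = lam0 * (1.0 + a1 * math.tanh(excite) - a2 * math.tanh(regulate))
    return max(lam, 0.0)

# Intensity at t=5 after events at t=1 and t=4, with an inspection at t=4.5.
print(rpp_intensity(5.0, events=[1.0, 4.0], inspections=[4.5]))
```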