
    Robustness of double-layer group-dependent combat network with cascading failure

    The networked combat system-of-systems (CSOS) is the direction in which combat development is heading as technology evolves. Achieving combat effectiveness requires a CSOS to cope well with external interference. Here we report a method for modeling a CSOS from the perspective of complex networks and, on that basis, explore the robustness of the combat network. First, a more realistic double-layer heterogeneous dependent combat network model is established. Then, a conditional group-dependency situation is considered in designing the failure rules for dependent failure, and the coupling relation between the two subnet layers is analyzed for cascading failure. On this basis, the initial load and capacity of each node are defined, as are the load-redistribution strategy and the status-judgment rules of the cascading failure model. Simulation experiments varying the attack modes and model parameters show that the robustness of the combat network can be effectively improved by raising the tolerance limit of one-way dependency of the functional net, the node capacity of the functional subnet, and the tolerance of the overload state. These conclusions can provide a useful reference for network structure optimization and network security protection in the military field.
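    The load-redistribution cascade described above can be sketched in a few lines. This is an illustrative simplification, not the paper's model: here a node's initial load is its degree raised to a parameter theta, its capacity is (1 + alpha) times that load, and a failed node's load is split among surviving neighbours in proportion to their capacities.

    ```python
    def cascade(adj, alpha, theta, attacked):
        """Simulate a cascading failure after removing one node.

        Load of node v is deg(v)**theta; capacity is (1+alpha)*load.
        A failed node's load goes to its surviving neighbours,
        split in proportion to their capacities.
        """
        load = {v: len(adj[v]) ** theta for v in adj}
        cap = {v: (1 + alpha) * load[v] for v in adj}
        failed = {attacked}
        queue = [attacked]
        while queue:
            v = queue.pop()
            nbrs = [u for u in adj[v] if u not in failed]
            total_cap = sum(cap[u] for u in nbrs)
            for u in nbrs:
                if total_cap > 0:
                    load[u] += load[v] * cap[u] / total_cap
                if load[u] > cap[u]:  # overloaded: node fails and will shed its load
                    failed.add(u)
                    queue.append(u)
        return failed

    # star graph: hub 0 connected to leaves 1..4; attacking the hub
    # overloads every leaf when capacities are tight (alpha = 0.5)
    adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
    print(cascade(adj, alpha=0.5, theta=1.0, attacked=0))
    ```

    Raising alpha (node capacity) in this toy model contains the cascade, which mirrors the abstract's finding that larger node capacity improves robustness.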

    Joint dimensioning of server and network infrastructure for resilient optical grids/clouds

    We address the dimensioning of infrastructure, comprising both network and server resources, for large-scale decentralized distributed systems such as grids or clouds. We design the resulting grid/cloud to be resilient against network link or server failures. To this end, we exploit relocation: under failure conditions, a grid job or cloud virtual machine may be served at an alternate destination (i.e., different from the one used under failure-free conditions). We thus consider grid/cloud requests to have a known origin, but assume a degree of freedom as to where they end up being served, which is the case for grid applications of the bag-of-tasks (BoT) type or hosted virtual machines in the cloud case. We present a generic methodology based on integer linear programming (ILP) that: 1) chooses a given number of sites in a given network topology at which to install server infrastructure; and 2) determines the amount of both network and server capacity to cater for both the failure-free scenario and failures of links or nodes. For the latter, we consider either failure-independent (FID) or failure-dependent (FD) recovery. Case studies on European-scale networks show that relocation allows a considerable reduction of the total amount of network and server resources, especially in sparse topologies and for higher numbers of server sites. Adopting a failure-dependent backup routing strategy does lead to lower resource dimensions, but only when we adopt relocation (especially for a high number of server sites): without exploiting relocation, the potential savings of FD versus FID are not meaningful.
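    The site-selection step (1) can be illustrated with a brute-force stand-in for the paper's ILP. Everything here is hypothetical scaffolding: `dist` holds assumed demand-to-site path costs, and choosing k >= 2 sites guarantees every demand can relocate to a surviving site if any single site fails.

    ```python
    from itertools import combinations

    def choose_sites(demands, dist, candidates, k):
        """Pick k server sites minimising the summed primary-path cost.

        With k >= 2 sites, every demand keeps a reachable site even if
        any single site fails (relocation).  A toy substitute for the
        paper's ILP, which also dimensions link and server capacity.
        """
        best = None
        for sites in combinations(candidates, k):
            # each demand is served by its cheapest (primary) site
            cost = sum(min(dist[d][s] for s in sites) for d in demands)
            if best is None or cost < best[1]:
                best = (sites, cost)
        return best

    # two demand nodes, three candidate sites, assumed path costs
    dist = {
        'a': {'X': 1, 'Y': 4, 'Z': 2},
        'b': {'X': 3, 'Y': 1, 'Z': 2},
    }
    print(choose_sites(['a', 'b'], dist, ['X', 'Y', 'Z'], 2))  # -> (('X', 'Y'), 2)
    ```

    A real instance would replace the enumeration with an ILP solver and add capacity variables for links and servers under each failure scenario.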

    Modelling Interdependent Cascading Failures in Real World Complex Networks using a Functional Dependency Model

    Infrastructure systems are becoming increasingly complex and interdependent. As a result, our ability to predict the likelihood of large-scale failure of these systems has significantly diminished, and the consequence is a greatly increased risk of devastating impacts on society. Traditionally these systems have been analysed using physically based models. However, this approach can only provide information for a specific network and is limited by the number of scenarios that can be tested. In an attempt to overcome this shortcoming, many studies have used network graph theory to provide an alternative analysis approach. This approach has tended to consider infrastructure systems in isolation, but has recently been extended to the analysis of interdependent networks through combination with percolation theory. However, these studies have focused on synthetic networks and tend to consider only the topology of the system. In this paper we develop a new analysis approach, based upon network theory but accounting for the hierarchical structure and functional dependency observed in real-world infrastructure networks. We apply this method to two real-world networks to show that it can be used to quantify the impact that failures within an electricity network have upon a dependent water network.
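    The electricity-to-water dependency can be sketched as follows. The rules here are an assumed simplification of the paper's functional dependency model: a water asset fails if the substation powering it fails, and failures then propagate down the water network's own supply hierarchy.

    ```python
    def dependent_failure(elec_ok, water_supply, water_links):
        """Propagate failures from an electricity network into a
        dependent water network.

        elec_ok:      substation -> operational flag
        water_supply: water asset -> substation powering it
        water_links:  downstream water asset -> upstream asset feeding it
        """
        # a water asset starts failed if its power source is down
        water_ok = {w: elec_ok.get(e, False) for w, e in water_supply.items()}
        changed = True
        while changed:  # propagate along the water hierarchy to a fixed point
            changed = False
            for downstream, upstream in water_links.items():
                if water_ok[downstream] and not water_ok[upstream]:
                    water_ok[downstream] = False
                    changed = True
        return water_ok

    elec_ok = {'sub1': False, 'sub2': True}        # substation sub1 has failed
    water_supply = {'pumpA': 'sub1', 'pumpB': 'sub2', 'tank': 'sub2'}
    water_links = {'tank': 'pumpA'}                # the tank is fed by pumpA
    print(dependent_failure(elec_ok, water_supply, water_links))
    ```

    The tank is powered by a healthy substation yet still fails, because its upstream pump lost power: exactly the kind of cross-network impact the paper quantifies.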

    Inter-similarity between coupled networks

    Recent studies have shown that a system composed of several randomly interdependent networks is extremely vulnerable to random failure. However, real interdependent networks are usually not randomly coupled; rather, pairs of dependent nodes are coupled according to some regularity, which we coin inter-similarity. For example, we study a system composed of an interdependent worldwide port network and a worldwide airport network and show that well-connected ports tend to couple with well-connected airports. We introduce two quantities for measuring the level of inter-similarity between networks: (i) the inter degree-degree correlation (IDDC) and (ii) the inter-clustering coefficient (ICC). We then show, both with simulation models and by analyzing the port-airport system, that as the networks become more inter-similar the system becomes significantly more robust to random failure.
    Comment: 4 pages, 3 figures
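    A minimal sketch of the IDDC idea, assuming it amounts to the Pearson correlation between the degrees of coupled node pairs (the paper's exact definition may differ):

    ```python
    from math import sqrt

    def iddc(deg_a, deg_b, coupling):
        """Inter degree-degree correlation: Pearson correlation between
        the degrees of coupled node pairs, one node from each network."""
        xs = [deg_a[i] for i, j in coupling]
        ys = [deg_b[j] for i, j in coupling]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
        sx = sqrt(sum((x - mx) ** 2 for x in xs) / n)
        sy = sqrt(sum((y - my) ** 2 for y in ys) / n)
        return cov / (sx * sy)

    # maximally inter-similar coupling: the best-connected port is paired
    # with the best-connected airport, and so on down the ranking
    deg_port = {0: 5, 1: 3, 2: 1}
    deg_air = {0: 6, 1: 4, 2: 2}
    print(iddc(deg_port, deg_air, [(0, 0), (1, 1), (2, 2)]))
    ```

    An IDDC near 1 indicates hub-to-hub coupling, which per the abstract makes the coupled system markedly more robust to random failure.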

    Non-Stationary Random Process for Large-Scale Failure and Recovery of Power Distributions

    A key objective of the smart grid is to improve the reliability of utility services to end users. This requires strengthening the resilience of distribution networks that lie at the edge of the grid. However, distribution networks are exposed to external disturbances such as hurricanes and snow storms, under which electricity service to customers is disrupted repeatedly. External disturbances cause large-scale power failures that are neither well understood, nor rigorously formulated, nor systematically studied. This work studies the resilience of power distribution networks to large-scale disturbances in three respects. First, a non-stationary random process is derived to characterize an entire life cycle of large-scale failure and recovery. Second, resilience is defined based on this non-stationary random process. Closed-form analytical expressions are derived under specific large-scale failure scenarios. Third, the non-stationary model and the resilience metric are applied to a real-life example of large-scale disruption caused by Hurricane Ike. Real data on large-scale failures from an operational network are used to learn the time-varying model parameters and resilience metrics.
    Comment: 11 pages, 8 figures, submitted to IEEE Sig. Pro
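    The non-stationary idea can be made concrete with the simplest such model, a non-homogeneous Poisson process whose expected failure count is the integral of a time-varying rate. The piecewise-constant rate profile below is a made-up illustration, not the paper's fitted model:

    ```python
    def expected_failures(rates, t):
        """Mean count of a non-stationary (non-homogeneous Poisson)
        failure process up to time t: Lambda(t) = integral of lambda(s) ds.

        rates: list of (duration, rate) segments, in order.
        """
        total, elapsed = 0.0, 0.0
        for dur, lam in rates:
            seg = min(dur, max(0.0, t - elapsed))  # overlap of [elapsed, elapsed+dur] with [0, t]
            total += lam * seg
            elapsed += dur
        return total

    # hypothetical hurricane life cycle: calm (0.1 failures/h),
    # landfall surge (5/h), then slow recovery-phase failures (0.2/h)
    profile = [(10, 0.1), (4, 5.0), (20, 0.2)]
    print(expected_failures(profile, 14))  # through the end of the surge
    ```

    A stationary model with one constant rate could not capture the surge-then-recovery shape, which is why the paper works with a non-stationary process over the entire failure-and-recovery life cycle.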

    A probabilistic model for information and sensor validation

    This paper develops a new theory and model for information and sensor validation. The model represents relationships between variables using Bayesian networks and utilizes probabilistic propagation to estimate the expected values of variables. If the estimated value of a variable differs from the actual value, an apparent fault is detected. The fault is only apparent, since the estimated value may itself be based on faulty data. The theory extends our understanding of when it is possible to isolate real faults from potential faults, and supports the development of an algorithm that is capable of isolating real faults without deferring the problem to expert-provided domain-specific rules. To enable practical adoption for real-time processes, an anytime version of the algorithm is developed that, unlike most other algorithms, is capable of returning improving assessments of the validity of the sensors as it accumulates more evidence over time. The developed model is tested by applying it to the validation of temperature sensors during the start-up phase of a gas turbine, when conditions are not stable, a problem that is known to be challenging. The paper concludes with a discussion of the practical applicability and scalability of the model.
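    The apparent-versus-real fault distinction can be sketched with redundant sensors. This toy stands in for the paper's Bayesian-network propagation: each sensor is estimated from the others' mean, and one drifting probe makes every sensor look faulty, so a separate isolation step is needed.

    ```python
    def apparent_faults(readings, tol):
        """Flag each sensor whose reading deviates from the value
        estimated from the remaining sensors (here: their mean, a
        stand-in for Bayesian-network propagation)."""
        flagged = []
        for i, r in enumerate(readings):
            others = readings[:i] + readings[i + 1:]
            if abs(r - sum(others) / len(others)) > tol:
                flagged.append(i)
        return flagged

    def isolate_real(readings, tol):
        """Isolate the real fault: deviation from the median, which a
        single faulty sensor cannot drag far."""
        med = sorted(readings)[len(readings) // 2]
        return [i for i, r in enumerate(readings) if abs(r - med) > tol]

    probes = [500.0, 502.0, 540.0]          # probe 2 is drifting
    print(apparent_faults(probes, tol=10.0))  # all three look faulty
    print(isolate_real(probes, tol=10.0))     # only probe 2 is
    ```

    The first function shows why faults are only "apparent": the healthy probes are estimated from data that includes the faulty one. The second illustrates the isolation step the paper formalizes.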

    Conversion and verification procedure for goal-based control programs

    Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems. A control architecture called Mission Data System, developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is developed and successfully verified using HyTech, a symbolic model-checking tool for linear hybrid systems.