
    Maximizing Reliability in WDM Networks through Lightpath Routing

    We study the reliability maximization problem in WDM networks with random link failures. Reliability in these networks is defined as the probability that the logical network is connected, and it is determined by the underlying lightpath routing and the link failure probability. We show that, in general, the optimal lightpath routing depends on the link failure probability, and we characterize the properties of lightpath routings that maximize reliability in different failure probability regimes. In particular, we show that in the low failure probability regime, maximizing the “cross-layer” min cut of the (layered) network maximizes reliability, whereas in the high failure probability regime, minimizing the size of a spanning tree of the network maximizes reliability. Motivated by these results, we develop lightpath routing algorithms for reliability maximization.
    National Science Foundation (U.S.) (Grant CNS-0830961); National Science Foundation (U.S.) (Grant CNS-1017800); United States. Defense Threat Reduction Agency (Grant HDTRA1-07-1-0004); United States. Defense Threat Reduction Agency (Grant HDTRA-09-1-0050)
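    As a concrete reading of the reliability metric, the sketch below estimates cross-layer reliability by Monte Carlo simulation: each physical link fails independently with probability p_fail, a logical link survives only if every physical link on its lightpath survives, and reliability is the fraction of trials in which the surviving logical topology is connected. The toy topologies, routing, and names are illustrative assumptions, not taken from the paper.

    # Monte Carlo sketch of cross-layer reliability (illustrative, not the
    # paper's algorithm). Requires networkx.
    import random
    import networkx as nx

    def cross_layer_reliability(physical_links, lightpath_routing, logical_nodes,
                                p_fail, trials=20_000):
        """Estimate Pr[logical topology connected] under independent
        physical link failures with probability p_fail."""
        connected = 0
        for _ in range(trials):
            failed = {e for e in physical_links if random.random() < p_fail}
            g = nx.Graph()
            g.add_nodes_from(logical_nodes)
            # A logical link survives only if its whole lightpath survives.
            g.add_edges_from(ll for ll, path in lightpath_routing.items()
                             if not failed.intersection(path))
            connected += nx.is_connected(g)
        return connected / trials

    # Toy example: a logical triangle routed over a 4-node physical ring.
    physical = [(0, 1), (1, 2), (2, 3), (3, 0)]
    routing = {(0, 1): [(0, 1)],        # logical link -> its physical path
               (1, 2): [(1, 2)],
               (0, 2): [(2, 3), (3, 0)]}
    print(cross_layer_reliability(physical, routing, [0, 1, 2], p_fail=0.05))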

    Survivability in layered networks

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 195-204).
    In layered networks, a single failure at the lower (physical) layer may cause multiple failures at the upper (logical) layer. As a result, traditional schemes that protect against single failures may not be effective in layered networks. This thesis studies the problem of maximizing network survivability in the layered setting, with a focus on optimizing the embedding of the logical network onto the physical network.
    In the first part of the thesis, we investigate the fundamental properties of layered networks and show that basic network connectivity structures, such as cuts, paths, and spanning trees, exhibit fundamentally different characteristics from their single-layer counterparts. This leads to our development of a new cross-layer survivability metric that properly quantifies the resilience of the layered network against physical failures. Using this new metric, we design algorithms, based on multi-commodity flows, that embed the logical network onto the physical network so as to maximize cross-layer survivability.
    In the second part of the thesis, we extend our model to a random failure setting and study the cross-layer reliability of the networks, defined to be the probability that the upper-layer network stays connected under the random failure events. We generalize the classical polynomial expression for network reliability to the layered setting. Using Monte Carlo techniques, we develop efficient algorithms to compute an approximate polynomial expression for reliability as a function of the link failure probability. The construction of the polynomial eliminates the need to resample when the cross-layer reliability under different link failure probabilities is assessed. Furthermore, the polynomial expression provides important insight into the connection between the link failure probability, the cross-layer reliability, and the structure of a layered network. We show that in general the optimal embedding depends on the link failure probability, and we characterize the properties of embeddings that maximize reliability under different failure probability regimes. Based on these results, we propose new iterative approaches to improve the reliability of layered networks. We demonstrate via extensive simulations that these new approaches result in embeddings with significantly higher reliability than existing algorithms.
    by Kayi Lee. Ph.D.
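    The polynomial construction above can be sketched as follows, under a simplifying brute-force sampling assumption: estimate, for each failure-set size i, the fraction f_i of size-i physical link sets whose failure leaves the logical topology connected; then R(p) ≈ Σ_i C(m,i) · f_i · p^i · (1-p)^(m-i) can be evaluated at any failure probability p without resampling. The thesis's actual estimator is more refined; all names below are assumptions.

    # Hedged sketch of a cross-layer reliability polynomial via per-size
    # Monte Carlo sampling. Requires networkx.
    import random
    from math import comb
    import networkx as nx

    def reliability_polynomial(physical_links, lightpath_routing, logical_nodes,
                               samples_per_size=2000):
        """Return R(p) ~ sum_i C(m, i) * f[i] * p**i * (1-p)**(m-i)."""
        m = len(physical_links)
        f = []              # f[i]: survival fraction for failure sets of size i
        for i in range(m + 1):
            ok = 0
            for _ in range(samples_per_size):
                failed = set(random.sample(physical_links, i))
                g = nx.Graph()
                g.add_nodes_from(logical_nodes)
                g.add_edges_from(ll for ll, path in lightpath_routing.items()
                                 if not failed.intersection(path))
                ok += nx.is_connected(g)
            f.append(ok / samples_per_size)
        # Coefficients are sampled once; evaluate at any p afterwards.
        return lambda p: sum(comb(m, i) * f[i] * p**i * (1 - p)**(m - i)
                             for i in range(m + 1))

    # Same toy layered network as in the earlier sketch.
    physical = [(0, 1), (1, 2), (2, 3), (3, 0)]
    routing = {(0, 1): [(0, 1)], (1, 2): [(1, 2)], (0, 2): [(2, 3), (3, 0)]}
    R = reliability_polynomial(physical, routing, [0, 1, 2])
    print(R(0.01), R(0.5))      # two failure probabilities, one sampling pass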

    Maximizing Reliability in WDM Networks Through Lightpath Routing

    We study the reliability maximization problem in wavelength division multiplexing (WDM) networks with random link failures. Reliability in these networks is defined as the probability that the logical network is connected, and it is determined by the underlying lightpath routing, network topologies, and the link failure probability. By introducing the notion of lexicographical ordering for lightpath routings, we characterize precise optimization criteria for maximum reliability in the low failure probability regime. Based on the optimization criteria, we develop lightpath routing algorithms that maximize the reliability, and logical topology augmentation algorithms for further improving reliability. We also study the reliability maximization problem in the high failure probability regime.
    National Science Foundation (U.S.) (Grant CNS-0830961); National Science Foundation (U.S.) (Grant CNS-1017800); United States. Defense Threat Reduction Agency (Grant HDTRA1-07-1-0004); United States. Defense Threat Reduction Agency (Grant HDTRA-09-1-0050)
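    One plausible way to realize the lexicographical ordering is to compare lightpath routings by their cross-layer cut vectors: entry i counts the physical link sets of size i+1 whose failure disconnects the logical topology, and a routing whose vector is lexicographically smaller should be more reliable as the failure probability tends to zero. The brute-force enumeration and all names below are illustrative assumptions, workable only on tiny examples.

    # Hedged sketch: lexicographic comparison of routings by cut vectors.
    from itertools import combinations
    import networkx as nx

    def cut_vector(physical_links, routing, logical_nodes):
        """vec[i] = number of physical link sets of size i+1 whose failure
        disconnects the logical topology (exhaustive; tiny inputs only)."""
        vec = []
        for size in range(1, len(physical_links) + 1):
            count = 0
            for failed in combinations(physical_links, size):
                failed = set(failed)
                g = nx.Graph()
                g.add_nodes_from(logical_nodes)
                g.add_edges_from(ll for ll, path in routing.items()
                                 if not failed.intersection(path))
                count += not nx.is_connected(g)
            vec.append(count)
        return vec

    # On a 4-node physical ring, routing logical link (0, 2) disjointly (a)
    # versus sharing physical links with other lightpaths (b):
    physical = [(0, 1), (1, 2), (2, 3), (3, 0)]
    a = {(0, 1): [(0, 1)], (1, 2): [(1, 2)], (0, 2): [(2, 3), (3, 0)]}
    b = {(0, 1): [(0, 1)], (1, 2): [(1, 2)], (0, 2): [(0, 1), (1, 2)]}
    # Python compares lists lexicographically; a has no single-link cuts.
    print(cut_vector(physical, a, [0, 1, 2]) < cut_vector(physical, b, [0, 1, 2]))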

    Fault-Tolerant, but Paradoxical Path-Finding in Physical and Conceptual Systems

    We report our initial investigations into reliability and path-finding based models and propose future areas of interest. Inspired by broken sidewalks during on-campus construction projects, we develop two models for navigating this "unreliable network." These are based on a concept of "accumulating risk" backward from the destination, and both operate on directed acyclic graphs with a probability of failure associated with each edge. The first model serves as an introduction; its faults are addressed by the second, more conservative model. Next, we show a paradox when these models are used to construct polynomials on conceptual networks, such as design processes and software development life cycles. When the risk of a network increases uniformly, the most reliable path changes from wider and longer to shorter and narrower. If we let professional inexperience--such as with entry-level cooks and software developers--represent the probability of edge failure, does this change in path imply that the novice should follow instructions with fewer "back-up" plans, while those with alternative routes should be left to the expert?
    Comment: 8 pages
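    One plausible reading of "accumulating risk" backward from the destination is a backward dynamic program on the DAG: the destination gets success probability 1, and each node, visited in reverse topological order, takes the best product of an outgoing edge's survival probability and its successor's value. This generic sketch is an assumption, not the authors' exact model.

    # Backward risk accumulation on a DAG (illustrative sketch).
    import networkx as nx

    def most_reliable_paths(dag, dest):
        """For each node, the success probability of its most reliable path
        to dest; each edge carries an independent 'p_fail' attribute."""
        best = {dest: 1.0}
        # Reverse topological order guarantees successors are settled first.
        for v in reversed(list(nx.topological_sort(dag))):
            if v == dest:
                continue
            probs = [(1.0 - dag[v][u]["p_fail"]) * best[u]
                     for u in dag.successors(v) if u in best]
            if probs:
                best[v] = max(probs)
        return best

    g = nx.DiGraph()
    g.add_edge("start", "mid", p_fail=0.25)   # two moderately risky legs...
    g.add_edge("mid", "end", p_fail=0.25)
    g.add_edge("start", "end", p_fail=0.5)    # ...versus one riskier shortcut
    print(most_reliable_paths(g, "end"))      # two-hop route wins: 0.5625 > 0.5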

    The failure tolerance of mechatronic software systems to random and targeted attacks

    This paper describes a complex networks approach to studying the failure tolerance of mechatronic software systems under various types of hardware and/or software failures. We produce synthetic system architectures based on evidence of modular and hierarchical modular product architectures and known motifs for the interconnection of physical components to software. The system architectures are then subjected to various forms of attack that simulate failure of critical hardware or software. Four types of attack are investigated: degree centrality, betweenness centrality, closeness centrality, and random attack. Failure tolerance of the system is measured by a 'robustness coefficient', a topological 'size' metric of the connectedness of the attacked network. We find that the betweenness centrality attack results in the most significant reduction in the robustness coefficient, confirming betweenness centrality, rather than the number of connections (i.e., degree), as the most conservative metric of component importance. A counter-intuitive finding is that "designed" system architectures, including bus, ring, and star architectures, are not significantly more failure-tolerant than interconnections with no prescribed architecture, that is, a random architecture. Our research provides a data-driven approach to engineering the architecture of mechatronic software systems for failure tolerance.
    Comment: Proceedings of the 2013 ASME International Design Engineering Technical Conferences & Computers and Information in Engineering Conference IDETC/CIE 2013, August 4-7, 2013, Portland, Oregon, USA (In Print)
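    The attack experiment can be approximated with a short simulation: remove nodes in descending order of the chosen centrality, recomputing the ranking after every removal, and track the largest connected component as a simple stand-in for the paper's robustness coefficient (whose exact definition differs). The synthetic graph and all names below are assumptions.

    # Hedged sketch of centrality-ordered attacks. Requires networkx.
    import random
    import networkx as nx

    def attack(graph, strategy="betweenness"):
        """Yield the largest-component fraction after each node removal."""
        g = graph.copy()
        n0 = g.number_of_nodes()
        rank = {
            "degree": nx.degree_centrality,
            "betweenness": nx.betweenness_centrality,
            "closeness": nx.closeness_centrality,
            "random": lambda h: {v: random.random() for v in h},
        }[strategy]
        while g.number_of_nodes() > 1:
            scores = rank(g)                      # recompute after each removal
            g.remove_node(max(scores, key=scores.get))
            yield max(len(c) for c in nx.connected_components(g)) / n0

    g = nx.barabasi_albert_graph(100, 2, seed=1)  # stand-in synthetic architecture
    for strategy in ("betweenness", "degree", "closeness", "random"):
        curve = list(attack(g, strategy))
        print(strategy, round(sum(curve) / len(curve), 3))  # mean surviving fraction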

    Contrasting Views of Complexity and Their Implications For Network-Centric Infrastructures

    There exists a widely recognized need to better understand and manage complex “systems of systems,” ranging from biology, ecology, and medicine to network-centric technologies. This is motivating the search for universal laws of highly evolved systems and driving demand for new mathematics and methods that are consistent, integrative, and predictive. However, the theoretical frameworks available today are not merely fragmented but sometimes contradictory and incompatible. We argue that complexity arises in highly evolved biological and technological systems primarily to provide mechanisms to create robustness. However, this complexity itself can be a source of new fragility, leading to “robust yet fragile” tradeoffs in system design. We focus on the role of robustness and architecture in networked infrastructures, and we highlight recent advances in the theory of distributed control driven by network technologies. This view of complexity in highly organized technological and biological systems is fundamentally different from the dominant perspective in the mainstream sciences, which downplays function, constraints, and tradeoffs, and tends to minimize the role of organization and design.