9 research outputs found

    High-reliability architectures for networks under stress

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2003. Includes bibliographical references (p. 157-165). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. In this thesis, we develop a methodology for architecting high-reliability communication networks. Previous results in the network reliability field are mostly theoretical in nature, with little immediate applicability to the design of real networks. We bring together these contributions and develop new results and insights which are of value in designing networks that meet prescribed levels of reliability. Furthermore, most existing results assume that component failures are statistically independent in nature. We take initial steps in developing a methodology for the design of networks with statistically dependent link failures. We also study the architectures of networks under extreme stress. By Guy E. Weichenberg. S.M.
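
    As a quick illustration of the all-terminal reliability metric underlying this line of work, the sketch below estimates by Monte Carlo simulation the probability that a small topology stays connected when links fail independently. The example graph, failure probability, and trial count are illustrative assumptions, not values from the thesis.

```python
import random

def connected(nodes, edges):
    """Return True if the surviving edges connect all nodes (simple DFS)."""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = set(), [next(iter(adj))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return seen == set(adj)

def all_terminal_reliability(nodes, edges, p_fail, trials=50_000):
    """Monte Carlo estimate of Pr[all nodes connected] with i.i.d. link failures."""
    ok = 0
    for _ in range(trials):
        survivors = [e for e in edges if random.random() > p_fail]
        ok += connected(nodes, survivors)
    return ok / trials

# Illustrative 5-node ring, with and without an extra chord (0, 2).
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(all_terminal_reliability(range(5), ring, 0.05))
print(all_terminal_reliability(range(5), ring + [(0, 2)], 0.05))
```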

    Resilient Wireless Sensor Networks Using Topology Control: A Review

    Wireless sensor networks (WSNs) may be deployed in failure-prone environments, and WSN nodes easily fail due to unreliable wireless connections, malicious attacks and resource-constrained features. Nevertheless, if a WSN can tolerate the loss of at most k − 1 nodes while the remaining nodes stay connected, the network is called k-connected; k is one of the most important indicators of a WSN's self-healing capability. Following a WSN design flow, this paper surveys resilience issues from the topology control and multi-path routing point of view. This paper provides a discussion of transmission and failure models, which have an important impact on research results. Afterwards, this paper reviews theoretical results and representative topology control approaches that guarantee WSNs are k-connected at three different network deployment stages: pre-deployment, post-deployment and re-deployment. Multi-path routing protocols are discussed, and many NP-complete or NP-hard problems regarding topology control are identified. The challenging open issues are discussed at the end. This paper can serve as a guideline for designing resilient WSNs.
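
    A brute-force sketch of the k-connectivity property described above: check that the graph stays connected after removing every possible set of k − 1 nodes. This is only feasible for small illustrative topologies (practical tools use max-flow/Menger-based checks instead); the example graph is an assumption, not taken from the survey.

```python
from itertools import combinations

def is_connected(alive, adj):
    """DFS restricted to the `alive` node set."""
    if not alive:
        return True
    seen, stack = set(), [next(iter(alive))]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(w for w in adj[v] if w in alive)
    return seen == alive

def is_k_connected(nodes, edges, k):
    """True if removing any k-1 nodes leaves the rest connected."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    if len(nodes) <= k:  # convention: a k-connected graph needs more than k nodes
        return False
    return all(is_connected(nodes - set(cut), adj)
               for cut in combinations(nodes, k - 1))

nodes = {0, 1, 2, 3, 4, 5}
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 3), (1, 4)]
print(is_k_connected(nodes, edges, 2))  # True: survives any single node loss
print(is_k_connected(nodes, edges, 3))  # False: removing nodes 1 and 3 isolates node 2
```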

    Management and Control of Scalable and Resilient Next-Generation Optical Networks

    Two research topics in next-generation optical networks with wavelength-division multiplexing (WDM) technologies were investigated: (1) scalability of network management and control, and (2) resilience/reliability of networks upon faults and attacks. In scalable network management, the scalability of management information for inter-domain light-path assessment was studied. The light-path assessment was formulated as a decision problem based on decision theory and probabilistic graphical models. It was found that the partial information available can provide the desired performance, i.e., a small percentage of erroneous decisions can be traded off to achieve a large saving in the amount of management information. In network resilience under malicious attacks, the resilience of all-optical networks under in-band crosstalk attacks was investigated with probabilistic graphical models. Graphical models provide an explicit view of the spatial dependencies in attack propagation, as well as computationally efficient approaches, e.g., the sum-product algorithm, for studying network resilience. With the proposed cross-layer model of attack propagation, key factors that affect the resilience of the network from the physical layer and the network layer were identified. In addition, analytical results on network resilience were obtained for typical topologies including ring, star, and mesh-torus networks. In network performance upon failures, traffic-based network reliability was systematically studied. First, a uniform deterministic traffic model at the network layer was adopted to analyze the impacts of network topology, failure dependency, and failure protection on network reliability. Then, a random network-layer traffic model with Poisson arrivals was applied to further investigate the effect of network-layer traffic distributions on network reliability. Finally, asymptotic results of network reliability metrics with respect to arrival rate were obtained for typical network topologies in the heavy-load regime. The main contributions of the thesis include: (1) fundamental understandings of scalable management and resilience of next-generation optical networks with WDM technologies; and (2) the innovative application of probabilistic graphical models, an emerging approach in machine learning, to the research of communication networks. Ph.D. Committee Chair: Ji, Chuanyi; Committee Member: Chang, Gee-Kung; Committee Member: McLaughlin, Steven; Committee Member: Ralph, Stephen; Committee Member: Zegura, Ellen.
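
    As a flavor of the topology-specific analysis mentioned above, the sketch below works out the textbook closed form for an n-link ring under independent link failures: the ring stays connected iff at most one link fails, giving reliability (1 − p)^n + n·p·(1 − p)^(n − 1), cross-checked by Monte Carlo. This is a generic illustration, not the traffic-based metric developed in the thesis.

```python
import random

def ring_reliability_exact(n, p):
    """All-terminal reliability of an n-link ring: at most one link may fail."""
    return (1 - p) ** n + n * p * (1 - p) ** (n - 1)

def ring_reliability_mc(n, p, trials=200_000):
    ok = 0
    for _ in range(trials):
        failures = sum(random.random() < p for _ in range(n))
        ok += failures <= 1
    return ok / trials

n, p = 8, 0.05  # illustrative ring size and link-failure probability
print(ring_reliability_exact(n, p))  # ~0.9427
print(ring_reliability_mc(n, p))     # should agree to a few decimal places
```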

    Autonomous routing algorithms for networks with wide-spread failures : a case for differential backlog routing

    Thesis (M. Eng.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 109-112). We study the performance of a differential backlog routing algorithm in a network with random failures. Differential Backlog routing is a novel routing algorithm in which packets are routed to their destinations through multiple paths based on queue backlog information. It is known that Differential Backlog routing maximizes the throughput capacity of the network; however, little is known about practical implementations of Differential Backlog routing or about its delay performance. We compare Differential Backlog to Shortest Path and show that Differential Backlog routing outperforms Shortest Path when the network is heavily loaded and when the failure rate is high. Moreover, a hybrid routing algorithm that combines principles of both Shortest Path and Differential Backlog routing is presented, and is shown to outperform both. Finally, we demonstrate further improvements in the delay performance of the aforementioned hybrid routing algorithm through the use of Digital Fountains. By Wajahat Faheem Khan. M.Eng.
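
    A minimal sketch of the per-hop decision behind differential backlog (backpressure) routing as described above: forward to the neighbor with the largest positive backlog difference, or hold the packet if no difference is positive. This simplifies to a single queue per node, whereas the algorithms studied in the thesis maintain per-destination backlogs; the topology and queue values are illustrative assumptions.

```python
def backpressure_next_hop(node, neighbors, backlog):
    """Forward to the neighbor maximizing the (positive) backlog difference."""
    best, best_diff = None, 0
    for nbr in neighbors[node]:
        diff = backlog[node] - backlog[nbr]
        if diff > best_diff:
            best, best_diff = nbr, diff
    return best  # None means: hold the packet this time slot

# Illustrative 4-node topology and queue backlogs.
neighbors = {'a': ['b', 'c'], 'b': ['a', 'd'], 'c': ['a', 'd'], 'd': ['b', 'c']}
backlog = {'a': 9, 'b': 4, 'c': 7, 'd': 0}
print(backpressure_next_hop('a', neighbors, backlog))  # 'b' (diff 5 beats diff 2)
```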

    Geographic location and geographic prediction performance benefits for infrastructureless wireless networks

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 173-176). The field of infrastructureless wireless networks (IWNs) is a broad and varied research area with a history of different assumption sets and methods of analysis. Much of the focus in the area of IWNs has been on connectivity and throughput/energy/delay (T/E/D) tradeoffs, which are important and valuable metrics. When specific IWN routing protocols are developed, they are often difficult to characterize analytically. In this thesis we review some of the important results in IWNs, in the process providing a comparison of wideband (power-limited) versus narrowband (interference-limited) networks. We show that the use of geographic location and geographic prediction (GL/GP) can dramatically increase the performance of IWNs. We compare past results in the context of GL/GP and develop new results in this area. We also develop the idea of throughput burden and scaling for the distribution of topology and routing information in IWNs, and we hope that this work provides a context in which further research can be performed.

    We primarily focus our work on wideband networks while also reviewing some narrowband results. In particular, we focus on wideband networks with non-zero processing energy at the nodes, which, together with distance-dependent transmission energy, is a main source of power consumption in the network. Research in this area often does not take processing energy into account, but previous work shows that processing energy is an important consideration. Indeed, the consideration of processing energy is the determining factor in whether a whisper-to-the-nearest-neighbor (WtNN) or characteristic-hop-distance routing scheme is optimal. WtNN routing takes a large number of short hops, while characteristic-hop-distance routing chooses the optimal hop distance based on the distance-dependent transmission energy, the processing energy, and the attenuation exponent. For a one-dimensional network, we use a uniform all-to-all traffic model to determine the total hop count and achievable throughput for three routing types: WtNN without GL/GP, WtNN with GL/GP, and characteristic hop distance with GL/GP. We assume a fixed-rate system and a random and uniform node distribution. The uniform all-to-all traffic model is the model in which every node communicates with every other node at a specified rate, and the achievable throughput is the achievable rate at which each source can send data to each of its destinations. Our results show that the performance difference between WtNN with and without GL/GP is minimal for one-dimensional networks, while the reduction in hop count of characteristic-hop-distance routing compared to WtNN routing is significant. Further, the achievable throughput of characteristic-hop-distance routing is significantly better than that of WtNN networks.

    We present a method to determine the link rate scaling necessary for link-state distribution to maintain topology and routing information in mobile IWNs. We develop several results, the main one being the rate scaling for two-dimensional networks in which every node is mobile; we use a random chord mobility model to represent independent node movement. Our results show that in the absence of GL/GP, there is a significant network burden for maintaining topology and routing information at the network nodes. We also derive real-world scaling results from the general analytic results, and these show the poor scaling of networks without GL/GP: for networks of 100 to 1000 nodes, the rate required to maintain topology in mobile wireless networks is on the order of hundreds of megabits to gigabits per second. It is infeasible to devote such significant data rates solely to maintaining topology and routing information, so some other method of maintaining this information will need to be utilized. Given the growing number of devices connected to the Internet, it is likely that IWNs will become more prevalent in the future. Despite the significant amount of research to date, there is still much work to be done to determine the attributes of a realistic and scalable system. To ensure the scalability of future systems and decrease the throughput required for network maintenance, such systems will need to use geographic location and geographic prediction information. By Shane A. Fink. S.M.
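
    The characteristic-hop-distance tradeoff described above admits a simple closed form. If each hop of length d costs E_proc + c·d^α, then covering a distance D with hops of length d costs (D/d)(E_proc + c·d^α), which is minimized at d* = (E_proc / (c(α − 1)))^(1/α). The sketch below evaluates this under made-up constants; the units and values are illustrative assumptions, not the thesis's parameters.

```python
def total_energy(D, d, e_proc, c, alpha):
    """Energy to cover distance D using hops of length d."""
    return (D / d) * (e_proc + c * d ** alpha)

def characteristic_hop_distance(e_proc, c, alpha):
    """Minimizer of total_energy over d (set the derivative to zero)."""
    return (e_proc / (c * (alpha - 1))) ** (1.0 / alpha)

# Hypothetical constants: processing energy, transmission coefficient, exponent.
D, e_proc, c, alpha = 1000.0, 50e-9, 1e-10, 3.0
d_star = characteristic_hop_distance(e_proc, c, alpha)
print(d_star)                                     # ~6.3 (optimal hop length)
print(total_energy(D, d_star, e_proc, c, alpha))  # characteristic hop distance
print(total_energy(D, 1.0, e_proc, c, alpha))     # WtNN-style short hops: worse
```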

    Approximation Algorithms for Broadcasting in Simple Graphs with Intersecting Cycles

    Broadcasting is an information dissemination problem in a connected network, in which one node, called the originator, must distribute a message to all other nodes by placing a series of calls along the communication lines of the network. Once informed, nodes aid the originator in distributing the message. Finding the minimum broadcast time of any vertex in an arbitrary graph is NP-complete. The problem remains NP-complete even for planar graphs of degree 3 and for graphs whose vertex set can be partitioned into a clique and an independent set. The best theoretical upper bound gives a logarithmic approximation, and it has been shown that the broadcasting problem is NP-hard to approximate within a factor of 3 − ε. Polynomial-time solvability is known only for tree-like graphs: trees, unicyclic graphs, trees of cycles, necklace graphs, and some graphs where the underlying graph is a clique, such as fully connected trees and trees of cliques. In this thesis we study the broadcast problem in different classes of graphs where cycles intersect in at least one vertex. First we consider broadcasting in a simple graph where several cycles have common paths and two intersecting vertices, called a k-path graph, and present a constant-approximation algorithm to find the broadcast time of an arbitrary k-path graph. We also study the broadcast problem in a simple cactus graph, called a k-cycle graph, in which several cycles of arbitrary lengths are connected by a central vertex at one end, and we design a constant-approximation algorithm to find the broadcast time of an arbitrary k-cycle graph. Next we study the broadcast problem in a hypercube of trees, for which we present a 2-approximation algorithm for any originator, and we provide a linear algorithm to find the broadcast time in a hypercube of trees with one tree. We extend the result to any arbitrary graph whose nodes contain trees and design a linear-time constant-approximation algorithm for the case where the broadcast scheme in the arbitrary graph is already known. In Chapter 6 we study broadcasting in Harary graphs, for which we present an additive approximation that gives a 2-approximation in the worst case for the broadcast time of an arbitrary Harary graph. Next, for even values of n, we introduce a new graph, called the modified-Harary graph, and present a 1-additive approximation algorithm to find its broadcast time; we also show that a modified-Harary graph is a broadcast graph when k is logarithmic in n. Finally, we consider a diameter broadcast problem, where we obtain a lower bound on the broadcast time of a graph that has at least C(d + k − 1, d) + 1 vertices at distance d from the originator, where k ≥ 1.
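
    Trees are the simplest of the polynomially solvable classes mentioned above, and the classical dynamic program for them is short: a leaf needs 0 rounds, and an internal node that sorts its children by decreasing subtree broadcast time t1 ≥ t2 ≥ ... finishes in max over i of (t_i + i) rounds, since under the 1-port model it can start only one child per round. A sketch, with an illustrative example tree:

```python
def broadcast_time(tree, root):
    """Minimum rounds to inform all of `tree` from `root` (1-port model)."""
    def solve(v, parent):
        child_times = sorted(
            (solve(w, v) for w in tree[v] if w != parent), reverse=True)
        # Start the costliest subtree first: child i (1-indexed) begins at round i.
        return max((t + i for i, t in enumerate(child_times, start=1)),
                   default=0)
    return solve(root, None)

# Illustrative 6-node tree as an adjacency dict.
tree = {0: [1, 2, 3], 1: [0, 4, 5], 2: [0], 3: [0], 4: [1], 5: [1]}
print(broadcast_time(tree, 0))  # 3 rounds (and log2(6) > 2, so 3 is optimal)
```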

    Scalable fault management architecture for dynamic optical networks : an information-theoretic approach

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. MIT Barker Engineering Library copy: printed in pages. Also issued printed in pages. Includes bibliographical references (leaves 255-262). All-optical switching, in place of electronic switching, of high data-rate lightpaths at intermediate nodes is one of the key enabling technologies for economically scalable future data networks. This replacement of electronic switching with optical switching at intermediate nodes, however, presents new challenges for fault detection and localization in reconfigurable all-optical networks. Presently, fault detection and localization techniques, as implemented in SONET/G.709 networks, rely on electronic processing of parity checks at intermediate nodes. If similar techniques were adapted to all-optical reconfigurable networks, optical signals would need to be tapped out at intermediate nodes for parity checks. This additional electronic processing would break the all-optical transparency paradigm and thus significantly diminish the cost advantages of all-optical networks. In this thesis, we propose new fault-diagnosis approaches specifically tailored to all-optical networks, with the objective of keeping both the diagnostic capital expenditure and the diagnostic operation effort low. Instead of the aforementioned passive monitoring paradigm based on parity checks, we propose a proactive lightpath probing paradigm: optical probing signals are sent along a set of lightpaths in the network, and the network state (i.e., failure pattern) is then inferred from the testing results of this set of end-to-end lightpath measurements. Moreover, we assume that a subset of network nodes (up to all the nodes) is equipped with diagnostic agents, including both transmitters/receivers for probe transmission/detection and software processes for probe management, to perform fault detection and localization. The design objectives of this proposed proactive probing paradigm are twofold: i) to minimize the number of lightpath probes, keeping the diagnostic operational effort low, and ii) to minimize the amount of diagnostic hardware, keeping the diagnostic capital expenditure low.

    The network fault-diagnosis problem can be mathematically modeled within a group-testing-over-graphs framework. In particular, the network is abstracted as a graph in which the failure status of each node/link is modeled with a random variable (e.g., a Bernoulli distribution). A probe over any path in the graph results in a value, defined as the probe syndrome, which is a function of all the random variables associated with that path. A network failure pattern is inferred from a set of probe syndromes resulting from a set of optimally chosen probes. This framework enriches the traditional group-testing problem by introducing a topological structure, and it can be extended to model many other network-monitoring problems (e.g., packet delay, packet drop ratio, and noise) by choosing appropriate state variables. Under the group-testing-over-graphs framework with a probabilistic failure model, we initiate an information-theoretic approach to minimizing the average number of lightpath probes needed to identify all possible network failure patterns. Specifically, we establish an isomorphic mapping between the fault-diagnosis problem in network management and the source-coding problem in information theory. This mapping shows that the minimum average number of lightpath probes required is lower bounded by the information entropy of the network state, and that efficient source-coding algorithms (e.g., the run-length code) can be translated into scalable fault-diagnosis schemes under an additional probe feasibility constraint. Our analytical and numerical investigations yield a guideline for designing scalable fault-diagnosis algorithms: each probe should provide approximately 1 bit of state information, so that the total number of probes required is approximately equal to the entropy of the network state. To address the hardware cost of diagnosis, we also develop a probabilistic analysis framework to characterize the trade-off between hardware cost (i.e., the number of nodes equipped with Tx/Rx pairs) and diagnosis capability (i.e., the probability of successful failure detection and localization). Our results suggest that, in practical situations, the hardware cost can be reduced significantly by accepting a small amount of uncertainty about the failure status. By Yonggang Wen. Ph.D.
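
    The entropy guideline above is easy to make concrete: with independent link failures, the network-state entropy is the sum of per-link binary entropies, and it lower-bounds the average number of probes at roughly 1 bit each. A sketch, with illustrative failure probabilities:

```python
from math import log2

def binary_entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def state_entropy(link_failure_probs):
    """Entropy (bits) of the joint failure state, links failing independently."""
    return sum(binary_entropy(p) for p in link_failure_probs)

probs = [0.01] * 20  # illustrative: 20 links, each failing with probability 1%
print(state_entropy(probs))  # ~1.62 bits, so ~2 well-chosen probes on average
```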

    Optical flow switched networks

    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Includes bibliographical references (p. 253-279). In the four decades since optical fiber was introduced as a communications medium, optical networking has revolutionized the telecommunications landscape. It has enabled the Internet as we know it today, and is central to the realization of Network-Centric Warfare in the defense world. Sustained exponential growth in communications bandwidth demand, however, requires that the nexus of innovation in optical networking continue, in order to ensure cost-effective communications in the future. In this thesis, we present Optical Flow Switching (OFS) as a key enabler of scalable future optical networks. The general idea behind OFS (agile, end-to-end, all-optical connections) is decades old, if not as old as the field of optical networking itself. However, owing to the absence of an application for it, OFS remained an underdeveloped idea, bereft of answers as to how it could be implemented, how well it would perform, and how much it would cost relative to other architectures. The contributions of this thesis are in providing partial answers to these three broad questions. With respect to implementation, we address the physical-layer design of OFS in the metro-area and access networks, and develop sensible scheduling algorithms for OFS communication. Our performance study comprises a comparative capacity analysis for the wide area, as well as an analytical approximation of the throughput-delay tradeoff offered by OFS for inter-MAN communication. Lastly, with regard to the economics of OFS, we employ an approximate capital expenditure model, which enables a throughput-cost comparison of OFS with other prominent candidate architectures. Our conclusions point to the fact that OFS offers a significant advantage over other architectures in economic scalability. In particular, for sufficiently heavy traffic, OFS handles large transactions at far lower cost than other optical network architectures. In light of the increasing importance of large transactions in both commercial and defense networks, we conclude that OFS may be crucial to the future viability of optical networking. By Guy E. Weichenberg. Ph.D.

    High-Reliability Architectures for Networks under Stress

    In this paper, we consider the task of designing a physical network topology that meets a high level of reliability using unreliable network elements. We are motivated by the use of networks, and in particular optical networks, for high-reliability applications which involve unusual and catastrophic stresses. Our network model is one in which nodes are invulnerable and links are subject to failure, and we consider both the case of statistically independent and statistically dependent link failures. Our reliability metrics are the common all-terminal connectedness measure and the less commonly considered two-terminal connectedness measure. We compare, in the low- and high-stress regimes and via analytical approximations and simulations, common commercial architectures designed for all-terminal reliability when links are very reliable with alternative architectures which consider both of our reliability metrics. Furthermore, we show that for independent link failures, network design should be optimized with respect to high-stress reliability, as low-stress reliability is less sensitive to graph structure; and that under high stress, very high node degrees are required to achieve moderate reliability performance.
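
    A minimal Monte Carlo sketch contrasting the paper's two metrics under independent link failures: all-terminal reliability (every node reachable) versus two-terminal reliability (a given source-destination pair connected). The ring topology and the high-stress failure probability are illustrative assumptions:

```python
import random

def reach(nodes, edges, src):
    """Set of nodes reachable from `src` over the surviving edges."""
    adj = {v: [] for v in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, stack = set(), [src]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return seen

def reliability(nodes, edges, p_fail, s, t, trials=50_000):
    """Monte Carlo estimates of (all-terminal, two-terminal s-t) reliability."""
    all_term = two_term = 0
    for _ in range(trials):
        survivors = [e for e in edges if random.random() > p_fail]
        seen = reach(nodes, survivors, s)
        all_term += seen == set(nodes)
        two_term += t in seen
    return all_term / trials, two_term / trials

nodes = range(5)
ring = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(reliability(nodes, ring, 0.3, 0, 2))  # two-terminal exceeds all-terminal
```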