
    Management and Control of Scalable and Resilient Next-Generation Optical Networks

    Two research topics in next-generation optical networks with wavelength-division multiplexing (WDM) technologies were investigated: (1) scalability of network management and control, and (2) resilience/reliability of networks upon faults and attacks.

    In scalable network management, the scalability of management information for inter-domain light-path assessment was studied. The light-path assessment was formulated as a decision problem based on decision theory and probabilistic graphical models. It was found that the partial information available can provide the desired performance; that is, a small percentage of erroneous decisions can be traded off for a large saving in the amount of management information.

    In network resilience under malicious attacks, the resilience of all-optical networks under in-band crosstalk attacks was investigated with probabilistic graphical models. Graphical models provide an explicit view of the spatial dependencies in attack propagation, as well as computationally efficient approaches, e.g., the sum-product algorithm, for studying network resilience. With the proposed cross-layer model of attack propagation, key factors that affect network resilience at the physical layer and the network layer were identified. In addition, analytical results on network resilience were obtained for typical topologies, including ring, star, and mesh-torus networks.

    In network performance upon failures, traffic-based network reliability was systematically studied. First, a uniform deterministic traffic at the network layer was adopted to analyze the impacts of network topology, failure dependency, and failure protection on network reliability. Then a random network-layer traffic model with Poisson arrivals was applied to further investigate the effect of network-layer traffic distributions on network reliability.
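The sum-product algorithm mentioned in the abstract can be illustrated on a toy model. The sketch below runs message passing on a simple chain of nodes whose binary states indicate whether each node is affected by a crosstalk attack; the prior and coupling values are illustrative assumptions, not figures from the thesis, and the chain is a deliberately simplified stand-in for the cross-layer model.

```python
import numpy as np

# Minimal sum-product (belief propagation) sketch on a chain of nodes,
# showing how marginal attack probabilities fall out of local factors.
# All numbers below are illustrative assumptions.

n = 5                                # nodes along a chain of the network
unary = np.array([[0.9, 0.1]] * n)   # prior: P(clean), P(attacked) per node
# pairwise factor psi[a, b]: compatibility of neighboring states
# (an attacked node raises the chance its neighbor is attacked via crosstalk)
psi = np.array([[0.95, 0.05],
                [0.30, 0.70]])

# forward messages: m_f[i] is the message arriving at node i from the left
m_f = np.ones((n, 2))
for i in range(1, n):
    m_f[i] = (unary[i - 1] * m_f[i - 1]) @ psi
    m_f[i] /= m_f[i].sum()

# backward messages: m_b[i] is the message arriving at node i from the right
m_b = np.ones((n, 2))
for i in range(n - 2, -1, -1):
    m_b[i] = psi @ (unary[i + 1] * m_b[i + 1])
    m_b[i] /= m_b[i].sum()

# node marginals are the normalized product of prior and incoming messages
marginals = unary * m_f * m_b
marginals /= marginals.sum(axis=1, keepdims=True)
print(marginals[:, 1])               # marginal P(attacked) for each node
```

On a chain (a tree), this two-pass scheme is exact; on ring or mesh-torus topologies the thesis analyzes, loopy variants or topology-specific analysis are needed.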
    Finally, asymptotic results for network reliability metrics with respect to the arrival rate were obtained for typical network topologies under the heavy-load regime. The main contributions of the thesis include: (1) a fundamental understanding of scalable management and resilience of next-generation optical networks with WDM technologies; and (2) the innovative application of probabilistic graphical models, an emerging approach in machine learning, to the research of communication networks.
    Ph.D. Committee Chair: Ji, Chuanyi; Committee Member: Chang, Gee-Kung; Committee Member: McLaughlin, Steven; Committee Member: Ralph, Stephen; Committee Member: Zegura, Ellen

    Scalable fault management architecture for dynamic optical networks: an information-theoretic approach

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. MIT Barker Engineering Library copy: printed in pages. Also issued printed in pages. Includes bibliographical references (leaves 255-262).
    All-optical switching, in place of electronic switching, of high-data-rate lightpaths at intermediate nodes is one of the key enabling technologies for economically scalable future data networks. This replacement of electronic switching with optical switching at intermediate nodes, however, presents new challenges for fault detection and localization in reconfigurable all-optical networks. Presently, fault detection and localization techniques, as implemented in SONET/G.709 networks, rely on electronic processing of parity checks at intermediate nodes. If similar techniques are adapted to all-optical reconfigurable networks, optical signals need to be tapped out at intermediate nodes for parity checks. This additional electronic processing would break the all-optical transparency paradigm and thus significantly diminish the cost advantages of all-optical networks.
    In this thesis, we propose new fault-diagnosis approaches specifically tailored to all-optical networks, with the objective of keeping both the diagnostic capital expenditure and the diagnostic operational effort low. Instead of the aforementioned passive monitoring paradigm based on parity checks, we propose a proactive lightpath-probing paradigm: optical probing signals are sent along a set of lightpaths in the network, and the network state (i.e., the failure pattern) is then inferred from the results of this set of end-to-end lightpath measurements. Moreover, we assume that a subset of network nodes (up to all the nodes) is equipped with diagnostic agents, including both transmitters/receivers for probe transmission/detection and software processes for probe management, to perform fault detection and localization.
    The design objectives of this proposed proactive probing paradigm are twofold: (i) to minimize the number of lightpath probes, keeping the diagnostic operational effort low; and (ii) to minimize the amount of diagnostic hardware, keeping the diagnostic capital expenditure low.
    The network fault-diagnosis problem can be mathematically modeled within a group-testing-over-graphs framework. In particular, the network is abstracted as a graph in which the failure status of each node/link is modeled with a random variable (e.g., a Bernoulli distribution). A probe over any path in the graph results in a value, defined as the probe syndrome, which is a function of all the random variables associated with that path. A network failure pattern is inferred from the set of probe syndromes resulting from a set of optimally chosen probes. This framework enriches the traditional group-testing problem by introducing a topological structure, and can be extended to model many other network-monitoring problems (e.g., packet delay, packet drop ratio, noise) by choosing appropriate state variables.
    Under the group-testing-over-graphs framework with a probabilistic failure model, we initiate an information-theoretic approach to minimizing the average number of lightpath probes needed to identify all possible network failure patterns. Specifically, we have established an isomorphic mapping between the fault-diagnosis problem in network management and the source-coding problem in information theory. This mapping shows that the minimum average number of lightpath probes required is lower-bounded by the information entropy of the network state, and that efficient source-coding algorithms (e.g., run-length codes) can be translated into scalable fault-diagnosis schemes under additional probe-feasibility constraints.
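The group-testing-over-graphs abstraction can be made concrete with a toy example. The sketch below (an illustration of the abstraction only, not the thesis's inference algorithm) uses a hypothetical three-link line network: each link fails independently with a small Bernoulli probability, a probe over a path returns a syndrome equal to the OR of the failure states on that path, and the failure pattern is recovered by brute force as the most probable pattern consistent with all observed syndromes.

```python
import itertools

# Hypothetical 3-link line network A-B-C-D; link names and probe set are
# assumptions for illustration.
links = ["AB", "BC", "CD"]
p_fail = 0.05                        # per-link failure probability (assumed)
probes = {                           # probe -> set of link indices it traverses
    "A->C": {0, 1},
    "B->D": {1, 2},
    "A->B": {0},
}

def syndrome(pattern, path):
    """A probe 'lights up' (returns 1) iff any link on its path has failed."""
    return int(any(pattern[i] for i in path))

def infer(observed):
    """Return the most probable failure pattern matching all probe syndromes."""
    best, best_prob = None, -1.0
    for pattern in itertools.product([0, 1], repeat=len(links)):
        if all(syndrome(pattern, path) == observed[name]
               for name, path in probes.items()):
            prob = 1.0
            for bit in pattern:
                prob *= p_fail if bit else (1 - p_fail)
            if prob > best_prob:
                best, best_prob = pattern, prob
    return best

# Suppose the BC link has failed: both probes through it light up.
obs = {"A->C": 1, "B->D": 1, "A->B": 0}
print(infer(obs))                    # -> (0, 1, 0): BC identified as failed
```

The brute-force search is exponential in the number of links; the point of the framework is precisely to choose probes so that far fewer, well-designed measurements pin down the pattern.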
    Our analytical and numerical investigations yield a guideline for designing scalable fault-diagnosis algorithms: each probe should provide approximately 1 bit of state information, so that the total number of probes required is approximately equal to the entropy of the network state.
    To address the hardware cost of diagnosis, we also developed a probabilistic analysis framework to characterize the trade-off between hardware cost (i.e., the number of nodes equipped with Tx/Rx pairs) and diagnosis capability (i.e., the probability of successful failure detection and localization). Our results suggest that, in practical situations, the hardware cost can be reduced significantly by accepting a small amount of uncertainty about the failure status.
    by Yonggang Wen. Ph.D.
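The entropy lower bound behind the "~1 bit per probe" guideline is easy to compute for the simplest failure model. Assuming independent Bernoulli(p) link failures (the link count and failure rate below are illustrative), the network-state entropy is m times the binary entropy of p, and that figure lower-bounds the average number of binary-outcome probes any scheme needs:

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

m, p = 100, 0.01          # 100 links, 1% failure probability (assumed)
H = m * h(p)              # entropy of the whole network state, in bits
print(f"entropy = {H:.1f} bits -> at least ~{H:.0f} probes on average")
# Compare with naively probing every link once: m = 100 probes.
```

With these numbers the bound is roughly 8 bits, i.e., an ideal scheme needs on the order of 8 probes on average rather than 100, which is the sense in which rare failures make probing schemes scalable.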

    Proteomic analysis identifies key differences in the cardiac interactomes of dystrophin and micro-dystrophin

    ΔR4-R23/ΔCT micro-dystrophin (μDys) is a miniaturized version of dystrophin currently being evaluated in a Duchenne muscular dystrophy (DMD) gene therapy trial to treat skeletal and cardiac muscle disease. In pre-clinical studies, μDys efficiently rescues cardiac histopathology, but only partially normalizes cardiac function. To gain insights into factors that may affect the cardiac therapeutic efficacy of μDys, we compared by mass spectrometry the composition of purified dystrophin and μDys protein complexes in the mouse heart. We report that, compared to dystrophin, μDys has altered associations with α1- and β2-syntrophins, as well as with cavins, a group of caveolae-associated signaling proteins. In particular, we found that membrane localization of cavin-1 and cavin-4 in cardiomyocytes requires dystrophin and is profoundly disrupted in the heart of mdx^{5cv} mice, a model of DMD. Following cardiac stress/damage, membrane-associated cavin-4 recruits the signaling molecule ERK to caveolae, which activates key cardioprotective responses. Evaluation of ERK signaling revealed a profound inhibition, below the physiological baseline, in the mdx^{5cv} mouse heart. Expression of μDys in mdx^{5cv} mice prevented the development of cardiac histopathology but did not rescue membrane localization of cavins, nor did it normalize ERK signaling. Our study provides the first comparative analysis of purified protein complexes assembled in vivo by full-length dystrophin and a therapeutic micro-dystrophin construct. It reveals disruptions in cavins and ERK signaling that may contribute to DMD cardiomyopathy. This new knowledge is important for ongoing efforts to prevent and treat heart disease in DMD patients.

    Optimization Methods for Optical Long-Haul and Access Networks

    Optical communications based on fiber optics and the associated technologies have seen remarkable progress over the past two decades. Widespread deployment of optical fiber has been witnessed in backbone and metro networks as well as in access segments connecting to customer premises and homes. Designing and developing a reliable, robust, and efficient end-to-end optical communication system have thus emerged as topics of utmost importance to both researchers and network operators. To fulfill these requirements, various problems have surfaced and received attention, such as network planning, capacity placement, traffic grooming, traffic scheduling, and bandwidth allocation. Optimal network design aims at addressing one or more of these problems based on some optimization objective.
    In this thesis, we consider two of the most important problems in optical networks: survivability in optical long-haul networks, and bandwidth allocation and scheduling in optical access networks. For the former, we present efficient and accurate models for availability-aware design and service provisioning in p-cycle-based survivable networks. We also derive optimization models for survivable network design based on p-trails, a more general protection structure, and compare their performance with p-cycles. Major cost savings can be obtained when the optical access and long-haul subnetworks are brought closer together through consolidation of access and metro networks. As the distance between long-haul and access networks shrinks, and the demands placed on passive optical networks (PONs) grow, it becomes crucial to manage bandwidth efficiently in the access segment while providing the desired level of service availability in the long-haul backbone.
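The p-cycle concept referenced above has a simple combinatorial core, sketched below (concept only; the thesis's availability-aware optimization models are not reproduced). A preconfigured protection cycle offers one backup path to each link that lies on the cycle, and two backup paths to each "straddling" link, whose endpoints are on the cycle but whose span is not. The cycle and link set here are a hypothetical five-node example.

```python
# Hypothetical p-cycle A-B-C-D-E-A and a link set that includes two
# straddling links (A-C and B-E); names are assumptions for illustration.
cycle = ["A", "B", "C", "D", "E"]
network_links = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "A"),
                 ("A", "C"), ("B", "E")}

# links that lie on the cycle itself, stored as sorted endpoint pairs
on_cycle = {tuple(sorted((cycle[i], cycle[(i + 1) % len(cycle)])))
            for i in range(len(cycle))}

def protection_paths(link):
    """Number of backup paths this p-cycle offers for a given link."""
    u, v = sorted(link)
    if (u, v) in on_cycle:
        return 1            # on-cycle link: reroute the long way round the cycle
    if u in cycle and v in cycle:
        return 2            # straddling link: both arcs of the cycle are usable
    return 0                # endpoint off the cycle: unprotected by this cycle

for link in sorted(network_links):
    print(link, protection_paths(link))
```

This ability to protect straddlers with spare capacity placed only on the cycle is what gives p-cycles their ring-like switching speed with mesh-like capacity efficiency, and it is the structure the survivable-design optimization models select for.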
    We therefore also address the problem of bandwidth management and scheduling in passive optical networks: we design efficient joint and non-joint scheduling and bandwidth allocation methods for multichannel PONs as well as for next-generation 10 Gbps Ethernet PON (10G-EPON), while addressing the coexistence of 10G-EPONs and multichannel PONs.
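As a point of reference for the bandwidth-allocation problem, the sketch below shows one classic EPON policy, "limited service" DBA: each ONU reports its queue occupancy and the OLT grants the request capped at a fixed per-cycle maximum. This is a generic textbook baseline, not the joint multichannel scheduler developed in the thesis, and the grant cap is an assumed parameter.

```python
# Limited-service dynamic bandwidth allocation (DBA): a common EPON baseline.
MAX_GRANT = 15000          # bytes per ONU per polling cycle (assumed parameter)

def limited_service(requests):
    """Map ONU queue reports (bytes) to per-cycle grants (bytes)."""
    return {onu: min(req, MAX_GRANT) for onu, req in requests.items()}

reports = {"onu1": 4000, "onu2": 25000, "onu3": 0}
print(limited_service(reports))   # onu2 is capped at MAX_GRANT
```

The cap bounds the polling-cycle length (and hence delay for lightly loaded ONUs) at the cost of throttling heavily loaded ones; joint multichannel schedulers additionally decide which wavelength each grant rides on.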