12 research outputs found

    Scalable fault management architecture for dynamic optical networks : an information-theoretic approach

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (leaves 255-262).
    All-optical switching, in place of electronic switching, of high-data-rate lightpaths at intermediate nodes is one of the key enabling technologies for economically scalable future data networks. This replacement of electronic switching with optical switching at intermediate nodes, however, presents new challenges for fault detection and localization in reconfigurable all-optical networks. Presently, fault detection and localization techniques, as implemented in SONET/G.709 networks, rely on electronic processing of parity checks at intermediate nodes. If similar techniques were adapted to all-optical reconfigurable networks, optical signals would need to be tapped out at intermediate nodes for parity checks. This additional electronic processing would break the all-optical transparency paradigm and thus significantly diminish the cost advantages of all-optical networks. In this thesis, we propose new fault-diagnosis approaches specifically tailored to all-optical networks, with the objective of keeping both the diagnostic capital expenditure and the diagnostic operational effort low. Instead of the aforementioned passive monitoring paradigm based on parity checks, we propose a proactive lightpath probing paradigm: optical probing signals are sent along a set of lightpaths in the network, and the network state (i.e., the failure pattern) is then inferred from the test results of this set of end-to-end lightpath measurements. Moreover, we assume that a subset of network nodes (up to all the nodes) is equipped with diagnostic agents, including both transmitters/receivers for probe transmission/detection and software processes for probe management, to perform fault detection and localization. The design objectives of this proactive probing paradigm are twofold: i) to minimize the number of lightpath probes, keeping the diagnostic operational effort low, and ii) to minimize the amount of diagnostic hardware, keeping the diagnostic capital expenditure low.
    The network fault-diagnosis problem can be modeled mathematically within a group-testing-over-graphs framework. In particular, the network is abstracted as a graph in which the failure status of each node/link is modeled with a random variable (e.g., a Bernoulli distribution). A probe over any path in the graph yields a value, defined as the probe syndrome, which is a function of all the random variables associated with that path. A network failure pattern is inferred from the set of probe syndromes produced by a set of optimally chosen probes. This framework enriches the traditional group-testing problem by introducing a topological structure, and it can be extended to model many other network-monitoring problems (e.g., packet delay, packet drop ratio, noise, etc.) by choosing appropriate state variables. Under the group-testing-over-graphs framework with a probabilistic failure model, we initiate an information-theoretic approach to minimizing the average number of lightpath probes needed to identify all possible network failure patterns. Specifically, we establish an isomorphic mapping between the fault-diagnosis problem in network management and the source-coding problem in Information Theory. This mapping shows that the minimum average number of lightpath probes required is lower bounded by the information entropy of the network state, and that efficient source-coding algorithms (e.g., the run-length code) can be translated into scalable fault-diagnosis schemes under some additional probe-feasibility constraints. Our analytical and numerical investigations yield a guideline for designing scalable fault-diagnosis algorithms: each probe should provide approximately 1 bit of state information, so that the total number of probes required is approximately equal to the entropy of the network state.
    To address the hardware cost of diagnosis, we also develop a probabilistic analysis framework to characterize the trade-off between hardware cost (i.e., the number of nodes equipped with Tx/Rx pairs) and diagnosis capability (i.e., the probability of successful failure detection and localization). Our results suggest that, in practical situations, the hardware cost can be reduced significantly by accepting a small amount of uncertainty about the failure status.
    by Yonggang Wen. Ph.D.
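    To make the entropy guideline concrete, the sketch below (illustrative only, not code from the thesis) assumes a network whose links fail independently, each with probability p; the entropy of the resulting network state then lower bounds the average number of binary-outcome probes needed to identify the failure pattern. The function names are hypothetical.

```python
import math


def binary_entropy(p: float) -> float:
    """Entropy in bits of a Bernoulli(p) failure indicator."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)


def probe_lower_bound(num_links: int, p_fail: float) -> float:
    """Entropy of the network state when links fail independently.

    Under the source-coding analogy, this entropy lower bounds the average
    number of binary-outcome lightpath probes needed to identify the
    failure pattern exactly.
    """
    return num_links * binary_entropy(p_fail)


if __name__ == "__main__":
    for p in (0.01, 0.05, 0.10):
        print(f"100 links, p_fail={p}: at least {probe_lower_bound(100, p):.1f} probes on average")
```

    For 100 links with p = 0.01 the bound is roughly 8 probes on average, which illustrates why an entropy-matched scheme (about 1 bit of state information per probe) scales far better than testing each link separately.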

    Real-Time Fault Diagnosis of Permanent Magnet Synchronous Motor and Drive System

    Get PDF
    Permanent Magnet Synchronous Motors (PMSMs) have gained massive popularity in industrial applications such as electric vehicles, robotic systems, and offshore industries due to their efficiency, power density, and controllability. PMSMs working in such applications are constantly exposed to electrical, thermal, and mechanical stresses, resulting in different faults such as electrical, mechanical, and magnetic faults. These faults may lead to reduced efficiency, excessive heat, and even catastrophic system breakdown if not diagnosed in time. Therefore, developing methods for real-time condition monitoring and early fault detection can substantially lower maintenance costs, system downtime, and productivity loss. In this dissertation, condition monitoring and detection of the three most common faults in PMSMs and drive systems, namely inter-turn short circuit, demagnetization, and sensor faults, are studied. First, modeling and detection of the inter-turn short circuit fault is investigated by proposing one FEM-based model and one analytical model; in these two models, efforts are made to extract fault indicators or adjustments for use in combination with more complex detection methods. Subsequently, a systematic fault diagnosis of the PMSM and drive system containing multiple faults, based on structural analysis, is presented. After carrying out the structural analysis and obtaining the redundant part of the PMSM and drive system, several sequential residuals are designed and implemented based on the fault terms that appear in each of the redundant sets, in order to detect and isolate the studied faults, which are applied at different time intervals. Finally, real-time detection of faults in PMSMs and drive systems using a statistical signal-processing detector, the generalized likelihood ratio test (GLRT), is investigated. With the GLRT, a threshold is derived from the chosen probability of false alarm and probability of detection for each detector, and the decision indicating the presence of a studied fault is made by comparing the test statistic against this threshold. To reduce the detection and recovery delay, a recursive cumulative GLRT with an adaptive threshold is implemented; this recursive algorithm yields a smoother fault indicator that is compared against the threshold so that decisions can be made in real time. The experimental results show that the statistical detector efficiently detects all the unexpected faults in the presence of unknown noise without raising any false alarm, demonstrating the effectiveness of this diagnostic approach.
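    As a rough illustration of the GLRT thresholding described above (a generic textbook-style sketch under assumed conditions, not the dissertation's residual models or parameters), consider detecting an unknown constant offset in a residual corrupted by zero-mean Gaussian noise of known variance; the names `glrt_statistic` and `glrt_threshold` are hypothetical.

```python
import random
import statistics


def glrt_statistic(residual, noise_std):
    """GLRT statistic for an unknown DC offset in white Gaussian noise.

    Model: x[n] = A + w[n], w[n] ~ N(0, noise_std^2).
    H0: A = 0 (healthy), H1: A != 0 (fault).
    The statistic N * mean(x)^2 / noise_std^2 is chi-square(1) under H0.
    """
    n = len(residual)
    mean = sum(residual) / n
    return n * mean * mean / (noise_std ** 2)


def glrt_threshold(p_false_alarm):
    """Threshold achieving the requested false-alarm probability under H0."""
    # P(chi2_1 > g) = 2 * (1 - Phi(sqrt(g)))  =>  g = Phi^{-1}(1 - p_fa / 2)^2
    z = statistics.NormalDist().inv_cdf(1.0 - p_false_alarm / 2.0)
    return z * z


if __name__ == "__main__":
    random.seed(0)
    noise_std, p_fa = 0.1, 1e-3
    healthy = [random.gauss(0.0, noise_std) for _ in range(200)]
    faulty = [random.gauss(0.05, noise_std) for _ in range(200)]  # small offset standing in for a fault signature
    gamma = glrt_threshold(p_fa)
    for name, x in (("healthy", healthy), ("faulty", faulty)):
        t = glrt_statistic(x, noise_std)
        print(f"{name}: T = {t:.1f}, threshold = {gamma:.1f}, fault flagged = {t > gamma}")
```

    The recursive cumulative variant mentioned in the abstract would accumulate such statistics over time and adapt the threshold, trading a slightly longer decision window for a smoother indicator and faster recovery after the fault clears.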

    Layering as Optimization Decomposition: A Mathematical Theory of Network Architectures

    Full text link

    The 1993 Goddard Conference on Space Applications of Artificial Intelligence

    Get PDF
    This publication comprises the papers presented at the 1993 Goddard Conference on Space Applications of Artificial Intelligence held at the NASA/Goddard Space Flight Center, Greenbelt, MD on May 10-13, 1993. The purpose of this annual conference is to provide a forum in which current research and development directed at space applications of artificial intelligence can be presented and discussed.

    Safety and Reliability - Safe Societies in a Changing World

    Get PDF
    The contributions cover a wide range of methodologies and application areas for safety and reliability that contribute to safe societies in a changing world. These methodologies and applications include:
    - foundations of risk and reliability assessment and management
    - mathematical methods in reliability and safety
    - risk assessment
    - risk management
    - system reliability
    - uncertainty analysis
    - digitalization and big data
    - prognostics and system health management
    - occupational safety
    - accident and incident modeling
    - maintenance modeling and applications
    - simulation for safety and reliability analysis
    - dynamic risk and barrier management
    - organizational factors and safety culture
    - human factors and human reliability
    - resilience engineering
    - structural reliability
    - natural hazards
    - security
    - economic analysis in risk management

    A general technique to establish the asymptotic conditional diagnosability of interconnection networks

    No full text
    We develop a general and demonstrably widely applicable technique for determining the asymptotic conditional diagnosability of interconnection networks prevalent within parallel computing under the comparison diagnosis model. We apply our technique to replicate (yet extend) existing results for hypercubes and k-ary n-cubes before going on to obtain new results for folded hypercubes, pancake graphs and augmented cubes. In particular, we show that the asymptotic conditional diagnosability of folded hypercubes {FQn} is 3n−2, of pancake graphs {Pn} is 3n−7, and of augmented cubes {AQn} is 6n−17. We demonstrate how our technique is independent of structural properties of the interconnection network G in question and essentially only dependent upon the minimal size of the neighbourhood of a path of length 2 in G, the number of neighbours any two distinct vertices of G have in common, and the minimal degree of any vertex in G.
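    For a quick feel of how these closed forms compare as the dimension grows, the snippet below simply tabulates the values quoted in the abstract (the asymptotic formulas hold only for sufficiently large n; the dictionary and range here are illustrative).

```python
# Asymptotic conditional diagnosability formulas quoted in the abstract,
# under the comparison diagnosis model (valid for sufficiently large n).
FORMULAS = {
    "folded hypercube FQ_n": lambda n: 3 * n - 2,
    "pancake graph P_n": lambda n: 3 * n - 7,
    "augmented cube AQ_n": lambda n: 6 * n - 17,
}

for name, formula in FORMULAS.items():
    print(name, [formula(n) for n in range(8, 13)])
```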

    Bibliography of Lewis Research Center technical publications announced in 1993

    Get PDF
    This compilation of abstracts describes and indexes the technical reporting that resulted from the scientific and engineering work performed and managed by the Lewis Research Center in 1993. All the publications were announced in the 1993 issues of STAR (Scientific and Technical Aerospace Reports) and/or IAA (International Aerospace Abstracts). Included are research reports, journal articles, conference presentations, patents and patent applications, and theses.

    Combining SOA and BPM Technologies for Cross-System Process Automation

    Get PDF
    This paper summarizes the results of an industry case study that introduced a cross-system business process automation solution based on a combination of SOA and BPM standard technologies (i.e., BPMN, BPEL, WSDL). Besides discussing major weaknesses of the existing, custom-built solution and comparing them against experiences with the developed prototype, the paper presents a course of action for transforming the current solution into the proposed one. This includes a general approach, consisting of four distinct steps, as well as specific action items to be performed at every step. The discussion also covers language and tool support and the challenges arising from the transformation.