7 research outputs found

    Vehicle Integrated Prognostic Reasoner (VIPR) 2010 Annual Final Report

    Honeywell's Central Maintenance Computer Function (CMCF) and Aircraft Condition Monitoring Function (ACMF) represent the state of the art in integrated vehicle health management (IVHM). Underlying these technologies is a fault propagation modeling system that provides nose-to-tail coverage and root cause diagnostics. The Vehicle Integrated Prognostic Reasoner (VIPR) extends this technology to interpret evidence generated by advanced diagnostic and prognostic monitors provided by component suppliers in order to detect, isolate, and predict adverse events that affect flight safety. This report describes year one work, which included defining the architecture and communication protocols and establishing the user requirements for such a system. Based on these and a set of ConOps scenarios, we designed and implemented a demonstration of the communication pathways and the associated three-tiered health management architecture. A series of scripted scenarios showed how VIPR would detect adverse events before they escalate into safety incidents through a combination of advanced reasoning and additional aircraft data collected from an aircraft condition monitoring system. Demonstrating VIPR capability for cases recorded in the ASIAS database and cross-linking them with historical aircraft data is planned for year two.
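
    The fault-propagation reasoning described above lends itself to a compact illustration. Below is a minimal sketch, not the VIPR implementation: a hypothetical propagation model over invented component names, with single-fault root causes recovered by checking which upstream faults explain all triggered monitors.

```python
# Hypothetical fault propagation model: a fault at the key component can
# induce symptoms at each listed downstream component. Names are invented.
from collections import deque

PROPAGATES_TO = {
    "fuel_pump": ["engine_1"],
    "engine_1": ["generator_1", "bleed_air"],
    "generator_1": ["avionics_bus"],
}

def downstream(model, source):
    """All components a fault at `source` can reach, including itself."""
    seen, queue = {source}, deque([source])
    while queue:
        for nxt in model.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def single_fault_candidates(model, observed):
    """Components whose propagation closure covers every observed alarm."""
    components = set(model) | {c for targets in model.values() for c in targets}
    return sorted(c for c in components if observed <= downstream(model, c))

# Monitors on the generator and the avionics bus have fired:
print(single_fault_candidates(PROPAGATES_TO, {"generator_1", "avionics_bus"}))
# -> ['engine_1', 'fuel_pump', 'generator_1']; a real reasoner would rank
#    these candidates and request more aircraft data to isolate the root cause.
```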

    Survivability aspects of future optical backbone networks

    In today's fiber-optic networks, a single fiber can carry a gigantic amount of data, roughly the equivalent of 25 million simultaneous telephone calls. Network failures, such as a break in a fiber-optic cable, therefore disrupt the communication of a large number of end users. Network operators consequently choose to build their networks so that such large failures are handled automatically. This dissertation focuses on two aspects of survivability in future optical networks. The first objective is to establish robust data connections across multiple networks. By establishing sufficiently reliable connections over an infrastructure that is not managed by a single entity, one can, for example, offer high-quality Internet television worldwide. The solution studied aims not only to compute such a highly reliable connection, but also to do so with a minimum of network capacity. The second objective was to answer the question of how applying optical switching systems based on reconfigurable optical multiplexers affects the survivability of an optical network. At lower traffic volumes, optically switched networks gain little from such sophisticated methods. Electronically switched networks show no dependence on the data volume and always benefit from optimization.
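
    As a rough illustration of the survivable-routing problem described above, the sketch below provisions a working path plus a link-disjoint backup over an invented topology, so that a single fiber cut cannot take down the connection. It uses a naive two-step heuristic (shortest path, then shortest path over the surviving links); the dissertation's actual goal of minimizing total capacity calls for stronger methods, such as Suurballe-style disjoint-path algorithms or ILP formulations.

```python
# Naive survivable-routing sketch: working path + link-disjoint backup.
# Topology and costs are invented; the two-step heuristic can fail to
# find a backup even when disjoint paths exist, unlike Suurballe's algorithm.
from heapq import heappush, heappop

def shortest_path(links, src, dst):
    """Dijkstra over an undirected weighted link dict {(a, b): cost}."""
    adj = {}
    for (a, b), w in links.items():
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))
    heap, settled = [(0, src, [src])], {}
    while heap:
        cost, node, path = heappop(heap)
        if node == dst:
            return cost, path
        if settled.get(node, float("inf")) <= cost:
            continue
        settled[node] = cost
        for nxt, w in adj.get(node, []):
            heappush(heap, (cost + w, nxt, path + [nxt]))
    return None

links = {("A", "B"): 1, ("B", "D"): 1, ("A", "C"): 2, ("C", "D"): 2}
cost1, working = shortest_path(links, "A", "D")
used = {frozenset(hop) for hop in zip(working, working[1:])}
survivors = {l: w for l, w in links.items() if frozenset(l) not in used}
cost2, backup = shortest_path(survivors, "A", "D")
print(working, backup, "total capacity:", cost1 + cost2)
# -> ['A', 'B', 'D'] ['A', 'C', 'D'] total capacity: 6
```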

    The viability of IS enhanced knowledge sharing in mission-critical command and control centers

    Engineering processes such as the maintenance of mission-critical infrastructures are highly unpredictable processes that are vital for everyday life, as well as for national security goals. These processes are categorized as Emergent Knowledge Processes (EKP): organizational processes characterized by a changing set of actors, distributed knowledge bases, and emergent knowledge sharing activities, where the process itself has no predetermined structure. The research described here uses the telecommunications network fault diagnosis process as a specific example of an EKP. The field site chosen for this research is a global undersea telecommunication network whose nodes are staffed by trained personnel responsible for maintaining local equipment using Network Management Systems. Overall network coordination responsibilities are handled by a centralized command and control center, or Network Management Center. A formal case study is performed in this global telecommunications network to evaluate the design of an Alarm Correlation Tool (ACT). This work defines a design methodology for an Information System (IS) that can support complex engineering diagnosis processes. To that end, a Decision Support System design model is used to iterate through a number of design theories that guide design decisions. Through the model iterations, it is found that IS design theories such as Decision Support Systems (DSS), Expert Systems (ES), and Knowledge Management Systems (KMS) design theories do not produce systems appropriate for supporting complex engineering processes. A design theory for systems that support EKPs is substituted as the project's driving theory during the final iterations of the DSS Design Model. This design theory proposes the use of naive users to support the design process as one of its key principles. The EKP design theory principles are evaluated and addressed to provide feedback to this recently introduced Information System Design Theory. The research effort shows that the EKP design theory is also insufficient for designing complex engineering systems. As a result, the main contribution of this work is to augment design theory with a methodology that revolves around the analysis of the knowledge management and control environment as a driving force behind IS design. Finally, the research results show that a model-based knowledge capturing algorithm provides an appropriate vehicle to capture and manipulate experiential engineering knowledge. In addition, it is found that the proposed DSS Design Model assists in the refinement of highly complex system designs. The results also show that the EKP design theory is not sufficient to address all the challenges posed by systems that must support mission-critical infrastructures.

    Fault Detection and Identification in Computer Networks: A Soft Computing Approach

    Governmental and private institutions rely heavily on reliable computer networks for their everyday business transactions. Downtime of their infrastructure networks may cost millions of dollars. Fault management systems are used to keep today's complex networks running without significant downtime cost, using either active or passive techniques. Active techniques impose excessive management traffic, whereas passive techniques often ignore the uncertainty inherent in network alarms, leading to unreliable fault identification performance. In this research work, new algorithms are proposed for both types of techniques so as to address these handicaps. Active techniques use probing technology so that the managed network can be tested periodically and suspected malfunctioning nodes can be effectively identified and isolated. However, the diagnostic probes introduce extra management traffic and require storage space. To address this issue, two new CSP (Constraint Satisfaction Problem)-based algorithms are proposed to minimize management traffic while maintaining the diagnostic power of the available probes. The first algorithm is based on the standard CSP formulation and aims at significantly reducing the dependency matrix as a means of reducing the number of probes. The obtained probe set is used for fault detection and fault identification. The second algorithm is a fuzzy CSP-based algorithm. It is adaptive in the sense that an initial reduced fault detection probe set is used to determine the minimum set of probes used for fault identification. In the extensive experiments conducted in this research, both algorithms demonstrated advantages over existing methods in terms of the overall management traffic needed to successfully monitor the targeted network system. Passive techniques employ alarms emitted by network entities. However, the fault evidence provided by these alarms can be ambiguous, inconsistent, incomplete, and random. To address these limitations, alarms are correlated using a distributed Dempster-Shafer Evidence Theory (DSET) framework, in which the managed network is divided into a cluster of disjoint management domains. Each domain is assigned an intelligent agent for collecting and analyzing the alarms generated within that domain. These agents are coordinated by a single higher-level entity, an agent manager, that combines the partial views of these agents into a global one. Each agent employs a DSET-based algorithm that uses the probabilistic knowledge encoded in the available fault propagation model to construct a local composite alarm. Dempster's rule of combination is then used by the agent manager to correlate these local composite alarms. Furthermore, an adaptive fuzzy DSET-based algorithm is proposed to use the fuzzy information provided by the observed cluster of alarms so as to accurately identify the malfunctioning network entities. In this way, inconsistency among the alarms is removed by weighing each received alarm against the others, while the randomness and ambiguity of the fault evidence are addressed within a soft computing framework. The effectiveness of this framework has been investigated in extensive experiments. The proposed fault management system is able to detect malfunctioning behavior in the managed network with considerably less management traffic. Moreover, it effectively manages the uncertainty intrinsic to network alarms, thereby reducing its negative impact and significantly improving the overall performance of the fault management system.
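
    The core fusion step of the DSET framework, Dempster's rule of combination, is compact enough to sketch. The mass functions below are invented for illustration: two domain agents assign belief mass to sets of suspect network entities, and the agent manager combines them, normalizing away conflicting mass.

```python
# Dempster's rule of combination over mass functions whose focal elements
# are frozensets of suspect network entities. Agent reports are invented.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions; conflict mass is normalized away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb            # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are irreconcilable")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two domain agents report partially overlapping suspicions:
agent1 = {frozenset({"router3"}): 0.6, frozenset({"router3", "link7"}): 0.4}
agent2 = {frozenset({"router3"}): 0.5, frozenset({"link7"}): 0.5}
for hypothesis, mass in dempster_combine(agent1, agent2).items():
    print(set(hypothesis), round(mass, 3))
# -> {'router3'} 0.714 and {'link7'} 0.286: router3 accumulates most of
#    the combined mass once the conflicting evidence is discounted.
```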

    Automated IT Service Fault Diagnosis Based on Event Correlation Techniques

    In recent years, a paradigm shift in the area of IT service management could be witnessed. IT management no longer deals only with the network, end systems, or applications, but is more and more concerned with IT services. This is caused by the need of organizations to monitor the efficiency of internal IT departments and to have the possibility to subscribe to IT services from external providers. This trend has raised new challenges in the area of IT service management, especially with respect to service level agreements laying down the quality of service to be guaranteed by a service provider. Fault management also faces new challenges related to ensuring compliance with these service level agreements. For example, high utilization of network links in the infrastructure can imply a delay increase in the delivery of services with respect to agreed time constraints. Such relationships have to be detected and treated in a service-oriented fault diagnosis, which therefore deals not with faults in a narrow sense, but with service quality degradations. This thesis aims at providing a concept for service fault diagnosis, which is an important part of IT service fault management. At first, the need for further examination of this issue is motivated based on the analysis of services offered by a large IT service provider. A generalization of the scenario forms the basis for the specification of requirements, which are used for a review of related research work and commercial products. Even though some solutions for particular challenges have already been provided, a general approach to service fault diagnosis is still missing. To address this issue, a framework is presented in the main part of this thesis, with an event correlation component as its central part. Event correlation techniques that have been successfully applied to fault management in the area of network and systems management are adapted and extended accordingly. Guidelines for applying the framework to a given scenario are then provided. To show their feasibility in a real-world scenario, they are applied to both example services referenced earlier.
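
    As a toy illustration of service-oriented event correlation, not the thesis framework itself, the sketch below correlates resource-level events with service-level-agreement violation events inside a time window, using an assumed service-to-resource dependency map. All event fields, thresholds, and names are invented.

```python
# Toy event correlation: pair each SLA violation with resource events that
# may explain it, given a (hypothetical) service-to-resource dependency map.
from dataclasses import dataclass

@dataclass
class Event:
    time: float      # seconds since epoch, simplified
    source: str      # resource or service that emitted the event
    kind: str        # e.g. "high_utilization", "sla_latency_violation"

# Which resources each service depends on (assumed, for illustration):
DEPENDS_ON = {"web_hosting": {"link_12", "server_a"},
              "email": {"link_12", "server_b"}}

def correlate(events, window=60.0):
    """Explain SLA violations by nearby events on depended-on resources."""
    diagnoses = []
    for sla in (e for e in events if e.kind == "sla_latency_violation"):
        causes = [e.source for e in events
                  if e.kind == "high_utilization"
                  and abs(e.time - sla.time) <= window
                  and e.source in DEPENDS_ON.get(sla.source, set())]
        diagnoses.append((sla.source, causes or ["unexplained"]))
    return diagnoses

events = [Event(100.0, "link_12", "high_utilization"),
          Event(130.0, "web_hosting", "sla_latency_violation")]
print(correlate(events))   # -> [('web_hosting', ['link_12'])]
```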

    Improving fault coverage and minimising the cost of fault identification when testing from finite state machines

    Software needs to be adequately tested in order to increase the confidence that the system being developed is reliable. However, testing is a complicated and expensive process. Formal specification based models such as finite state machines have been widely used in system modelling and testing. In this PhD thesis, we primarily investigate fault detection and identification when testing from finite state machines. The research in this thesis comprises three topics: construction of multiple Unique Input/Output (UIO) sequences using Metaheuristic Optimisation Techniques (MOTs), improved fault coverage through robust Unique Input/Output Circuit (UIOC) sequences, and fault diagnosis when testing from finite state machines. In the studies of the construction of UIOs, a model is proposed in which a fitness function is defined to guide the search for input sequences that are potentially UIOs. In the studies of improved fault coverage, a new type of UIOC is defined. Based upon the Rural Chinese Postman Algorithm (RCPA), a new approach is proposed for the construction of more robust test sequences. In the studies of fault diagnosis, heuristics are defined that attempt to lead to failures being observed in shorter test sequences, which helps to reduce the cost of fault isolation and identification. The proposed approaches and techniques were evaluated against a set of case studies, which provides experimental evidence for their efficacy.
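
    The UIO property at the heart of the first topic is easy to state in code. Below is a small sketch over an invented Mealy machine: an input sequence is a UIO for state s if the output it produces from s differs from the output it produces from every other state. A fitness function for the metaheuristic search could, for instance, reward a candidate sequence by the number of states it already distinguishes from s.

```python
# Invented Mealy machine: (state, input) -> (next_state, output).
FSM = {
    ("s1", "a"): ("s2", "0"), ("s1", "b"): ("s1", "1"),
    ("s2", "a"): ("s3", "1"), ("s2", "b"): ("s1", "0"),
    ("s3", "a"): ("s1", "0"), ("s3", "b"): ("s2", "1"),
}
STATES = {"s1", "s2", "s3"}

def run(fsm, state, inputs):
    """Output string produced by applying `inputs` from `state`."""
    out = []
    for x in inputs:
        state, y = fsm[(state, x)]
        out.append(y)
    return "".join(out)

def is_uio(fsm, states, s, inputs):
    """True iff `inputs` yields a unique output from s among all states."""
    ref = run(fsm, s, inputs)
    return all(run(fsm, t, inputs) != ref for t in states if t != s)

print(is_uio(FSM, STATES, "s1", "aa"))
# -> True: "aa" yields "01" from s1, "10" from s2, and "00" from s3.
```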