
    EFFICIENT PROBE STATION PLACEMENT AND PROBE SET SELECTION FOR FAULT LOCALIZATION

    Network fault management has been a focus of research activity, with increasing emphasis on fault localization: zeroing in on the exact source of a failure from a set of observed symptoms. Fault diagnosis is a central aspect of network fault management. Since faults are unavoidable in communication systems, their quick detection and isolation is essential for the robustness, reliability, and accessibility of a system. Probing-based fault localization involves the placement of probe stations (specially instrumented nodes from which probes are sent to monitor the network), which determines the diagnostic capability of the probes those stations can send. Probe station locations affect probing efficiency, monitoring capability, and deployment cost. We present probe station selection algorithms that aim to minimize the number of probe stations and to make monitoring robust against failures in both deterministic and non-deterministic environments. We then implement algorithms that exploit interactions between probe paths to find a small collection of probes that can be used to locate faults. Small probe sets are desirable in order to minimize the costs imposed by probing, such as additional network load and data management requirements. We also discuss a novel integrated approach to probe station and probe set selection for fault localization: better placement of probe stations yields fewer probes while maintaining the same diagnostic power. We provide an experimental evaluation of the proposed algorithms through simulation results.
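    The probe set selection problem the abstract describes can be illustrated with a greedy sketch: given each probe's path, repeatedly pick the probe that distinguishes the most still-ambiguous pairs of nodes by their pass/fail signatures. This is a minimal illustration under a single-fault assumption, not the paper's actual algorithm; the probe names and topology below are invented.

```python
from itertools import combinations

def greedy_probe_selection(probes, nodes):
    """Greedily pick a small probe subset whose pass/fail signatures
    distinguish every pair of nodes (single-fault localization).
    `probes` maps a probe name to the set of nodes its path traverses."""
    pairs = set(combinations(sorted(nodes), 2))  # pairs still indistinguishable
    chosen = []
    while pairs:
        # the probe separating the most remaining pairs wins this round
        best = max(probes, key=lambda p: sum(
            (a in probes[p]) != (b in probes[p]) for a, b in pairs))
        separated = {(a, b) for a, b in pairs
                     if (a in probes[best]) != (b in probes[best])}
        if not separated:
            break  # remaining pairs cannot be told apart with these probes
        chosen.append(best)
        pairs -= separated
    return chosen

# Hypothetical 4-node network with three candidate probe paths
probes = {"p1": {"A", "B"}, "p2": {"B", "C"}, "p3": {"C", "D"}}
print(greedy_probe_selection(probes, {"A", "B", "C", "D"}))  # → ['p1', 'p2']
```

    Two probes suffice here because the four signatures (fail p1 only, fail both, fail p2 only, fail neither) are all distinct, which is the "interaction between probe paths" the abstract exploits.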

    Simple Random Sampling-Based Probe Station Selection for Fault Detection in Wireless Sensor Networks

    Fault detection for wireless sensor networks (WSNs) has been studied intensively in recent years. Most existing works statically choose the manager nodes as probe stations and probe the network at a fixed frequency. This straightforward solution, however, has several deficiencies. First, assigning the fault detection task only to the manager node unbalances the whole network and quickly overloads the already heavily burdened manager node, which ultimately shortens the lifetime of the whole network. Second, probing at a fixed frequency often generates useless network traffic, wasting the network's limited energy. Third, the traditional algorithm for choosing a probing node is too complicated for energy-critical wireless sensor networks. In this paper, we study the distribution characteristics of faulty nodes in wireless sensor networks and validate the Pareto principle that a small number of clusters contain most of the faults. We then present a Simple Random Sampling-based algorithm to dynamically choose sensor nodes as probe stations. A dynamic adjustment rule for the probing frequency is also proposed to reduce the number of useless probing packets. Simulation experiments demonstrate that the proposed algorithm and adjustment rule can effectively prolong the lifetime of a wireless sensor network without decreasing the fault detection rate.
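    The two ideas in this abstract, random rotation of the probing role and a load-sensitive probing interval, can be sketched as follows. The halving/doubling adjustment rule and the bounds are illustrative assumptions, not the paper's actual parameters.

```python
import random

def sample_probe_stations(cluster_nodes, k, rng=None):
    """Pick k probe stations by simple random sampling, so the probing
    burden rotates instead of always falling on the cluster head."""
    rng = rng or random.Random()
    return rng.sample(cluster_nodes, k)

def adjust_probe_interval(interval_s, faults_found, min_s=5, max_s=300):
    """Illustrative dynamic adjustment rule: probe more often while
    faults are being found, back off exponentially when the network
    is quiet, within [min_s, max_s]."""
    if faults_found:
        return max(min_s, interval_s // 2)
    return min(max_s, interval_s * 2)
```

    Backing off when no faults appear is what suppresses the "useless probing packets" the abstract targets, while the floor and ceiling keep detection latency and energy use bounded.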

    Efficient Probing Techniques for Fault Diagnosis

    Growth in network usage and the widespread deployment of networks for performance-critical applications have created demand for tools that can monitor network health with minimal management traffic. Adaptive probing holds the potential to provide effective tools for end-to-end monitoring and fault diagnosis over a network. In this paper we propose adaptive-probing-based algorithms that perform fault localization by adapting the probe set to the faults present in the network, and we compare the performance and efficiency of the proposed algorithms through simulation results.
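    The core of adaptive probing, sending follow-up probes only after an initial probe fails, can be sketched as a binary search along the failed probe's path. This is a minimal single-fault illustration, not the paper's algorithm; `node_ok` stands in for dispatching a real prefix probe.

```python
def adaptive_localize(path, node_ok):
    """Localize the first faulty node on a failed probe path by probing
    successively shorter prefixes (binary search on the path).
    `node_ok(n)` abstracts a probe result for node n."""
    lo, hi = 0, len(path) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        # a prefix probe succeeds iff every node up to mid is healthy
        if all(node_ok(n) for n in path[:mid + 1]):
            lo = mid + 1  # fault lies beyond mid
        else:
            hi = mid      # fault lies at or before mid
    return path[lo]
```

    Compared with a fixed probe set, this issues O(log n) extra probes per failure rather than probing every node on every cycle, which is the management-traffic saving the abstract claims.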

    Advanced flight control system study

    The architecture, requirements, and system elements of an ultrareliable, advanced flight control system are described. The basic criteria are a functional reliability of 10^-10 per flight hour and scheduled maintenance only every 6 months. A distributed system architecture is described, including a multiplexed communication system, a reliable bus controller, the use of skewed sensor arrays, and actuator interfaces. A test bed and flight evaluation program are proposed.

    Design and implementation for automated network troubleshooting using data mining

    Efficient and effective monitoring of mobile networks is vital given the number of users who rely on such networks and their importance. This paper presents a monitoring scheme for mobile networks that uses rule-based and decision-tree data mining classifiers to improve fault detection and handling, with the goal of deriving optimisation rules that improve anomaly detection. In addition, a monitoring scheme based on Bayesian classifiers was implemented for fault isolation and localisation. The data mining techniques described in this paper are intended to allow a system to be trained to learn network fault rules. The tests conducted support the conclusion that the rules are highly effective in improving network troubleshooting.
    Comment: 19 pages, 7 figures, International Journal of Data Mining & Knowledge Management Process (IJDKP) Vol.5, No.3, May 201
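    A trained decision tree ultimately reduces to nested threshold rules over key performance indicators. The sketch below shows the shape of such learned rules; the KPI names, thresholds, and fault labels are invented for illustration and are not taken from the paper.

```python
def classify_alarm(kpi):
    """Hand-written threshold rules of the kind a trained decision tree
    might learn from labelled fault data (all values illustrative)."""
    if kpi["drop_rate"] > 0.05:          # many calls dropping
        if kpi["rssi_dbm"] < -100:       # ...and signal is weak
            return "coverage_fault"
        return "congestion"              # ...signal fine, cell overloaded
    if kpi["handover_fail"] > 0.10:      # handovers failing in isolation
        return "neighbor_config_fault"
    return "normal"
```

    The advantage the paper points to is that such rules are learned from data rather than hand-written, so they adapt as the network's fault patterns change.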

    A survey of outlier detection methodologies

    Outlier detection has been used for centuries to detect and, where appropriate, remove anomalous observations from data. Outliers arise from mechanical faults, changes in system behaviour, fraudulent behaviour, human error, instrument error, or simply natural deviations in populations. Their detection can identify system faults and fraud before they escalate with potentially catastrophic consequences, and it can identify errors and remove their contaminating effect on the data set, thereby purifying the data for processing. The original outlier detection methods were arbitrary, but principled and systematic techniques are now used, drawn from the full gamut of computer science and statistics. In this paper, we survey contemporary techniques for outlier detection, identify their respective motivations, and distinguish their advantages and disadvantages in a comparative review.
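    Among the statistical techniques such surveys cover, the simplest is the z-score test: flag points that lie more than a few standard deviations from the mean. This sketch is one classical method for illustration, not a summary of the survey's taxonomy; the threshold is a common convention, not a fixed rule.

```python
from statistics import mean, stdev

def zscore_outliers(xs, threshold=2.0):
    """Flag points whose absolute z-score exceeds `threshold`, the
    classic parametric approach assuming roughly normal data."""
    m, s = mean(xs), stdev(xs)
    return [x for x in xs if abs(x - m) / s > threshold]

print(zscore_outliers([10, 11, 9, 10, 12, 10, 100]))  # → [100]
```

    A known weakness, and one reason surveys compare many methods: the outlier itself inflates the mean and standard deviation (masking), so in small samples no point can exceed a z-score of about (n-1)/sqrt(n), and robust or non-parametric alternatives are often preferred.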

    NASA SBIR abstracts of 1990 phase 1 projects

    The research objectives of the 280 projects placed under contract in the National Aeronautics and Space Administration (NASA) 1990 Small Business Innovation Research (SBIR) Phase 1 program are described. The basic document consists of edited, non-proprietary abstracts of the winning proposals submitted by small businesses in response to NASA's 1990 SBIR Phase 1 Program Solicitation. The abstracts are presented under the 15 technical topics within which Phase 1 proposals were solicited. Each project was assigned a sequential identifying number from 001 to 280, in order of its appearance in the body of the report. The document also includes appendixes that provide additional information about the SBIR program and permit cross-referencing of the 1990 Phase 1 projects by company name, state, principal investigator, NASA field center responsible for managing each project, and NASA contract number.

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope runs from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is the expected dramatic improvement in information-systems technologies, including computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: a unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitoring and control.