10 research outputs found

    On the Average Distance of the Hypercube Tree


    Integration of tools for the Design and Assessment of High-Performance, Highly Reliable Computing Systems (DAHPHRS), phase 1

    Systems for Strategic Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements confront the systems engineer with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces, so a controlled development process supported by appropriate automated tools must be used to assure that the system will meet its design objectives. This report describes an investigation of the methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of three parallel-computing architectures, the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP), were built and analyzed using candidate SDI weapons-to-target assignment algorithms as workloads, in order to identify the necessary system models, how those models interact, and which experiments and analyses should be performed. This effort revealed weaknesses in the existing methods and tools and identified the capabilities that will be required of both individual tools and an integrated toolset.
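    The report does not reproduce the candidate assignment algorithms, so as a purely hypothetical illustration of the kind of workload involved, the Python sketch below greedily assigns each weapon to the target with the highest expected marginal damage; the kill probabilities, target values, and the greedy rule itself are all invented for the example.

        # Hypothetical weapons-to-target assignment workload sketch; the actual
        # SDI candidate algorithms are not given in the report. Each weapon is
        # assigned greedily to the target whose expected marginal damage
        # (kill probability x residual survival x target value) is highest.

        def greedy_assignment(kill_prob, target_value):
            """kill_prob[w][t]: probability that weapon w destroys target t.
            target_value[t]: value of destroying target t.
            Returns a list mapping each weapon index to a target index."""
            survival = [1.0] * len(target_value)  # residual survival odds per target
            assignment = []
            for w, probs in enumerate(kill_prob):
                gains = [p * survival[t] * target_value[t] for t, p in enumerate(probs)]
                best = max(range(len(gains)), key=gains.__getitem__)
                assignment.append(best)
                survival[best] *= 1.0 - kill_prob[w][best]  # target may survive
            return assignment

        # Example: two weapons, three targets.
        print(greedy_assignment([[0.9, 0.2, 0.5], [0.3, 0.8, 0.4]],
                                [10.0, 5.0, 8.0]))  # -> [0, 1]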

    An empirical evaluation of techniques for parallel simulation of message passing networks

    In the field of computer design, simulation is an essential tool for validating and evaluating architectural proposals. Conventional simulation techniques, designed for use on sequential computers, are too slow when the system to be simulated is large or complex. The aim of this work is to find techniques that accelerate simulations by exploiting the parallelism available in current commercial multicomputers, and to use those techniques to study a model of a message router. This router was designed to form the communication infrastructure of a hypothetical massively parallel computer. Three parallel simulation techniques are considered: synchronous, asynchronous-conservative, and asynchronous-optimistic. These algorithms were implemented on three multicomputers: a transputer-based Supernode, an Intel Paragon, and a network of workstations. The influence on simulation performance of factors such as the characteristics of the simulated models, the organization of the simulators, and the characteristics of the target multicomputers was measured and characterized. It is concluded that optimistic parallel simulation techniques are not suitable for the kind of models considered, although they may provide good performance in other environments. A network of workstations is not the right platform for these experiments, because the communication demands of the parallel simulators surpass the abilities of a local area network; the granularity is too fine. Synchronous and conservative parallel simulation techniques perform very well on the Supernode and the Paragon, especially when the model to be simulated is complex or large, which is precisely the worst case for traditional sequential simulators. In this way, studies previously considered unrealizable because of their exceedingly high computational cost can be performed in reasonable time, and the range of uses of multicomputers is broadened beyond purely numerical applications.

    This work was partially funded by the Comisión Interministerial de Ciencia y Tecnología under contract TIC95-037.
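    As a hedged illustration of the synchronous technique evaluated in the thesis, the sketch below advances a set of logical processes (LPs) in lockstep: each round, a global reduction finds the minimum pending timestamp and every LP processes only the events carrying that timestamp. The ring-of-routers model, the unit link delay, and the hop-count payload are invented for the example; on a real multicomputer each LP would run on its own processor with a barrier between rounds.

        # Minimal sketch of synchronous parallel simulation: all LPs advance in
        # lockstep, processing only events at the current global minimum time.

        import heapq

        class LP:
            def __init__(self, name):
                self.name = name
                self.events = []  # local future-event list (min-heap of (time, hops))

            def schedule(self, time, hops):
                heapq.heappush(self.events, (time, hops))

            def next_time(self):
                return self.events[0][0] if self.events else float("inf")

            def execute(self, now, lps, my_index):
                # Consume every local event stamped exactly 'now'; a message with
                # hops remaining is forwarded to the ring neighbor with delay 1.0.
                while self.events and self.events[0][0] == now:
                    _, hops = heapq.heappop(self.events)
                    if hops > 0:
                        neighbor = lps[(my_index + 1) % len(lps)]
                        neighbor.schedule(now + 1.0, hops - 1)

        def run_synchronous(lps, horizon):
            while True:
                now = min(lp.next_time() for lp in lps)  # global reduction + barrier
                if now > horizon:
                    break
                for i, lp in enumerate(lps):  # each LP would run on its own CPU
                    lp.execute(now, lps, i)

        lps = [LP(f"router{i}") for i in range(4)]
        lps[0].schedule(0.0, 5)  # inject one message that makes five hops
        run_synchronous(lps, horizon=100.0)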

    Distributed Self Fault Diagnosis in Wireless Sensor Networks using Statistical Methods

    Wireless sensor networks (WSNs) are widely used in real-life applications in which the sensor nodes are deployed randomly in hostile, inaccessible, or adversarial environments. A major research focus in wireless sensor networks over the past decades has been diagnosing the sensor nodes to identify their fault status, which helps the network provide continuous service despite failures caused by environmental conditions. This thesis addresses several open issues related to fault diagnosis in wireless sensor networks, focusing on improving diagnostic accuracy, reducing communication overhead and latency, and achieving robustness to erroneous data through statistical methods. All the proposed algorithms are evaluated analytically and implemented in the standard network simulator NS3 (version 3.19).

    A distributed self fault diagnosis algorithm using neighbor coordination (DSFDNC) is proposed to identify both hard and soft faulty sensor nodes. The algorithm is distributed (it runs in each sensor node), self-diagnosable (each node identifies its own fault status), and can diagnose the most common faults: stuck-at-zero, stuck-at-one, random data, and hard faults. In this algorithm, each sensor node gathers the observed data from its neighbors and computes the mean to check for the presence of a faulty sensor node. If a node detects a faulty sensor node among its neighbors, it compares its observed data with the neighbors' data and predicts its own probable fault status. The final fault status is determined by diffusing the fault information obtained from the neighbors. The accuracy and completeness of the algorithm are verified by statistical analysis of the sensor data. Performance parameters such as diagnosis accuracy, false alarm rate, false positive rate, total number of message exchanges, energy consumption, network lifetime, and diagnosis latency of the DSFDNC algorithm are determined for different fault probabilities and average degrees and compared with existing distributed fault diagnosis algorithms.

    To enhance diagnosis accuracy, a second self fault diagnosis algorithm based on hypothesis testing (DSFDHT) is proposed, again using the neighbor coordination approach. The Neyman-Pearson hypothesis test is used to diagnose the soft fault status of each sensor node together with its neighbors, and the algorithm can diagnose faulty sensor nodes even when the average degree of the network is low. For sparse wireless sensor networks, the diagnosis accuracy, false alarm rate, and false positive rate of DSFDHT improve over DSFDNC while keeping the other performance parameters nearly the same.

    Classical fault-finding methods based on the mean, median, majority voting, and hypothesis testing are not suitable for large-scale wireless sensor networks because of the large deviations in the data transmitted by faulty sensor nodes. Therefore, a self fault diagnosis algorithm based on a modified three sigma edit test (DSFD3SET) is proposed, which diagnoses efficiently over large-scale wireless sensor networks. Its diagnosis accuracy, false alarm rate, and false positive rate improve on those of the DSFDNC and DSFDHT algorithms, and because it needs fewer message exchanges than they do, it also improves the total number of message exchanges, energy consumption, network lifetime, and diagnosis latency.
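    The abstract does not reproduce the exact form of the modified three sigma edit test, so the sketch below shows a common robust variant: the median and the median absolute deviation (MAD) replace the mean and standard deviation, so that a few wildly faulty readings cannot mask themselves. The threshold of 3.0 and the 1.4826 Gaussian consistency factor are conventional choices, not values taken from the thesis.

        # Robust "three sigma edit" outlier test over a node's neighborhood data.

        import statistics

        def three_sigma_edit(readings, threshold=3.0):
            """readings: sensed values from a node and its neighbors.
            Returns the indices of readings judged faulty (outlying)."""
            med = statistics.median(readings)
            mad = statistics.median(abs(x - med) for x in readings)
            sigma = 1.4826 * mad or 1e-9  # robust std estimate; avoid divide-by-zero
            return [i for i, x in enumerate(readings)
                    if abs(x - med) / sigma > threshold]

        # Node 3 reports a stuck-at value far from its neighborhood.
        print(three_sigma_edit([24.1, 24.3, 23.9, 99.0, 24.2, 24.0]))  # -> [3]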
    The DSFDNC, DSFDHT, and DSFD3SET algorithms treat faulty sensor nodes as soft faulty nodes whose misbehavior is permanent. In real wireless sensor networks, however, a sensor node may alternate between fault-free and faulty behavior over different periods of time; such nodes are considered intermittently faulty. Diagnosing intermittent faulty sensor nodes is a challenging problem because of the inconsistent result patterns they generate, and traditional distributed fault diagnosis (DIFD) algorithms consume many message exchanges to obtain the global fault status of the network. To reduce the number of message exchanges, a self fault diagnosis algorithm (HISFD3SET) is proposed that repeatedly runs the self-diagnosis procedure based on the modified three sigma edit test over an observation window to identify intermittently faulty sensor nodes, and it needs few iterations to do so. Simulation results show that the HISFD3SET algorithm improves on the DIFD algorithm in diagnosis accuracy, false alarm rate, and false positive rate.
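    To illustrate the repeated-diagnosis idea for intermittent faults, the following sketch classifies a node from the outcomes of several diagnosis rounds (for example, repeated three sigma edit tests): never flagged reads as fault-free, always flagged as permanently faulty, and anything in between as intermittently faulty. The number of rounds and the decision rule are illustrative assumptions, not the thesis's parameters.

        # Classify a node from repeated per-round diagnosis outcomes.

        def diagnose_intermittent(round_results):
            """round_results: booleans, one per round; True = round flagged the node."""
            flagged = sum(round_results)
            if flagged == 0:
                return "fault-free"
            if flagged == len(round_results):
                return "permanently faulty"
            return "intermittently faulty"

        # A node flagged in 3 of 8 repeated test rounds.
        print(diagnose_intermittent([False, True, False, True,
                                     False, False, True, False]))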

    Advances in Evolutionary Algorithms

    With the recent trend toward massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of the book is to present recent improvements, innovative ideas, and concepts from part of the vast field of evolutionary algorithms.

    Efficient Passive Clustering and Gateway Selection in MANETs

    Passive clustering does not employ control packets to collect topological information in ad hoc networks. In our proposal, we avoid frequent changes in the cluster architecture caused by the repeated election and re-election of cluster heads and gateways. Our primary objective is to make passive clustering more practical by employing an optimal number of gateways and reducing the number of rebroadcast packets.
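    As a hedged sketch of the passive-clustering mechanism this abstract builds on, the Python below piggybacks cluster state on ordinary data packets: the first node to transmit while undecided declares itself cluster head ("first declaration wins"), established heads keep their role to avoid re-election churn, and a node that overhears two or more distinct heads becomes a gateway between them. The states and the gateway rule are simplified illustrations, not the paper's exact procedure.

        # Passive clustering sketch: no dedicated control packets; cluster state
        # rides on data packets and roles settle by "first declaration wins".

        INITIAL, CLUSTER_HEAD, GATEWAY, ORDINARY = "initial", "head", "gateway", "ordinary"

        class Node:
            def __init__(self, node_id):
                self.node_id = node_id
                self.state = INITIAL
                self.heads_heard = set()  # distinct cluster heads overheard

            def send_data(self, payload):
                # The first node to transmit while undecided declares itself
                # cluster head at zero extra cost (first declaration wins).
                if self.state == INITIAL:
                    self.state = CLUSTER_HEAD
                return {"src": self.node_id, "state": self.state, "data": payload}

            def on_packet(self, pkt):
                if pkt["state"] == CLUSTER_HEAD:
                    self.heads_heard.add(pkt["src"])
                if self.state == CLUSTER_HEAD:
                    return  # heads keep their role, avoiding re-election churn
                # Hearing two or more distinct heads makes a node a gateway
                # (it can bridge clusters); hearing one makes it an ordinary member.
                if len(self.heads_heard) >= 2:
                    self.state = GATEWAY
                elif len(self.heads_heard) == 1:
                    self.state = ORDINARY

        a, b, c = Node("a"), Node("b"), Node("c")
        pkt = a.send_data("hello")             # 'a' becomes a cluster head
        b.on_packet(pkt)                       # 'b' hears one head -> ordinary
        c.on_packet(pkt)
        c.on_packet(Node("d").send_data("x"))  # 'c' hears a second head -> gateway
        print(a.state, b.state, c.state)       # head ordinary gateway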

    Space Communications: Theory and Applications. Volume 3: Information Processing and Advanced Techniques. A Bibliography, 1958 - 1963

    Annotated bibliography on information processing and advanced communication techniques - theory and applications of space communication