    A study of the selection of microcomputer architectures to automate planetary spacecraft power systems

    Performance and reliability models of alternate microcomputer architectures were examined as a methodology for optimizing system design. A methodology was developed for selecting an optimum microcomputer architecture for autonomous operation of planetary spacecraft power systems. Various microcomputer system architectures were analyzed to determine their applicability to spacecraft power systems. It is concluded that no standardization formula or common set of guidelines exists that provides an optimum configuration for a given set of specifications.

    Development and evaluation of a fault-tolerant multiprocessor (FTMP) computer. Volume 4: FTMP executive summary

    The FTMP architecture is a high-reliability computer concept modeled after a homogeneous multiprocessor architecture. Elements of the FTMP are operated in tight synchronism with one another, and hardware fault detection and fault masking are provided that are transparent to the software. Operating system design and user software design are thus greatly simplified. Performance of the FTMP is also comparable to that of a simplex equivalent due to the efficiency of the fault-handling hardware. The FTMP project constructed an engineering module of the FTMP, programmed the machine, and extensively tested the architecture through fault injection and other stress testing. This testing confirmed the soundness of the FTMP concepts.
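    The transparent fault masking described above is, at its core, a majority vote over replicated processor outputs. As an illustration only (the function below is our sketch, not the FTMP hardware), a bit-for-bit triple-modular-redundancy voter can be written as:

    ```python
    def bitwise_vote(a: int, b: int, c: int) -> int:
        """Bit-for-bit majority vote over three processors' output words.

        Each output bit is 1 exactly when at least two of the three inputs
        have that bit set: the classic TMR masking function, which hides
        any single faulty unit from the software running above it.
        """
        return (a & b) | (a & c) | (b & c)

    # A single corrupted word (here, c) is masked by the other two replicas:
    assert bitwise_vote(0b1010, 0b1010, 0b0011) == 0b1010
    ```

    Because the vote is taken on every word, the software never observes the faulty replica's output, which is what makes the masking transparent.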

    Performance of turbo multi-user detectors in space-time coded DS-CDMA systems

    In this thesis we address the problem of improving the uplink capacity and performance of a DS-CDMA system by combining multi-user detection (MUD) and turbo decoding. The two are combined following the turbo principle. Depending on the concatenation scheme used, we divide these receivers into Partitioned Approach (PA) and Iterative Approach (IA) receivers. To enable the iterative exchange of information, these receivers employ a Parallel Interference Cancellation (PIC) detector as the first receiver stage.
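    The PIC detector mentioned above can be sketched numerically. The signal model below is our own simplification (synchronous BPSK users whose matched-filter outputs satisfy y = R b + n for a cross-correlation matrix R), not the receiver implemented in the thesis:

    ```python
    import numpy as np

    def pic_detect(y, R, iterations=3):
        """Hypothetical parallel interference cancellation (PIC) stage.

        y : matched-filter outputs for K synchronous users
        R : K x K cross-correlation matrix with unit diagonal
        Each iteration re-detects every user's BPSK symbol in parallel,
        after subtracting the interference reconstructed from all other
        users' current symbol estimates.
        """
        b = np.sign(y)  # initial conventional-detector estimates
        for _ in range(iterations):
            interference = (R - np.eye(len(y))) @ b
            b = np.sign(y - interference)
        return b
    ```

    In a turbo MUD receiver, a stage like this would exchange soft information with the channel decoders rather than the hard decisions used here for brevity.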

    The Failure Detector Abstraction

    This paper surveys the failure detector concept along two dimensions. First, we study failure detectors as building blocks that simplify the design of reliable distributed algorithms. More specifically, we illustrate how failure detectors can factor out timing assumptions to detect failures in distributed agreement algorithms. Second, we study failure detectors as computability benchmarks. That is, we survey the weakest failure detector question and illustrate how failure detectors can be used to classify problems. We also highlight some limitations of the failure detector abstraction along each of these dimensions.
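    As a concrete illustration of the first dimension, a minimal heartbeat-based detector (our sketch; the class and parameter names are assumptions, not the paper's) confines the timing assumption to a single timeout, and revokes a suspicion when a late heartbeat arrives, which is the behaviour eventually perfect detectors rely on:

    ```python
    import time

    class HeartbeatFailureDetector:
        """A process is *suspected* once no heartbeat has arrived within
        `timeout` seconds; a later heartbeat revokes the suspicion, so a
        wrong timing guess causes only a temporary false suspicion."""

        def __init__(self, timeout, clock=time.monotonic):
            self.timeout = timeout
            self.clock = clock
            self.last_heartbeat = {}

        def heartbeat(self, process_id):
            """Record that a heartbeat from `process_id` arrived now."""
            self.last_heartbeat[process_id] = self.clock()

        def suspected(self, process_id):
            """True if `process_id` has been silent longer than `timeout`."""
            last = self.last_heartbeat.get(process_id)
            return last is None or self.clock() - last > self.timeout
    ```

    An agreement algorithm built on this interface consults only `suspected()`, never the clock, which is exactly how the abstraction factors the timing assumptions out of the algorithm.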

    Divide-and-Conquer Distributed Learning: Privacy-Preserving Offloading of Neural Network Computations

    Machine learning has become a highly utilized technology for decision making on high-dimensional data. As dataset sizes have grown increasingly large, so too have the neural networks needed to learn the complex patterns hidden within. This expansion has continued to the degree that it may be infeasible to train a model on a single device due to computational or memory limitations of the underlying hardware. Purpose-built computing clusters for training large models are commonplace, while networks of heterogeneous devices are typically more accessible. In addition, with the rise of 5G networks and computation at the edge becoming more commonplace, and inspired by the successes of the folding@home project in utilizing crowdsourced computation, we find the scenario of crowdsourcing the computation required to train a neural network particularly appealing. Distributed learning promises to bridge the widening gap between single-device performance and large-scale model computational requirements, but unfortunately, current distributed learning techniques do not maintain privacy of both the model and the input without an accuracy or computational tradeoff. In response, we present Divide and Conquer Learning (DCL), an approach that enables quantifiable privacy guarantees while offloading the computational burden of training to a network of devices. A user can divide the training computation of its neural network into neuron-sized computation tasks and distribute them to devices based on their available resources. The results are returned to the user and aggregated in an iterative process to obtain the final neural network model. To protect the privacy of the user's data and model, shuffling is applied to both the data and the neural network model before the computation tasks are distributed to devices.
Our strict adherence to the order of operations allows a user to verify the correctness of performed computations by assigning a task to multiple devices and cross-validating their results. This protects against network churn and detects faulty or misbehaving devices.
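    The redundant-assignment check in the last paragraph can be sketched as follows (the function and names are ours, not the thesis's): the same neuron-sized task is sent to several devices, the majority result is accepted, and dissenting devices are flagged.

    ```python
    from collections import Counter

    def cross_validate(results_by_device):
        """Accept the majority result for one replicated computation task.

        `results_by_device` maps a device id to the result it returned for
        the same task. Because the order of operations is fixed, honest
        devices must agree exactly; any dissenting device is reported as
        potentially faulty or misbehaving.
        """
        value, count = Counter(results_by_device.values()).most_common(1)[0]
        if count * 2 <= len(results_by_device):
            raise RuntimeError("no majority agreement; reassign the task")
        suspects = sorted(d for d, r in results_by_device.items() if r != value)
        return value, suspects

    # Two devices agree; the third is flagged for inspection:
    value, suspects = cross_validate({"dev-a": 0.125, "dev-b": 0.125, "dev-c": 0.9})
    assert value == 0.125 and suspects == ["dev-c"]
    ```

    Requiring exact agreement works here only because the computation is deterministic under the fixed order of operations; with nondeterministic floating-point reductions, a tolerance-based comparison would be needed instead.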