
    Neural networks and MIMD-multiprocessors

    Two artificial neural network models are compared: the Hopfield Neural Network model and the Sparse Distributed Memory model. Distributed algorithms for both are designed and implemented, their run-time characteristics are analyzed theoretically and tested in practice, and the storage capacities of the two networks are compared. The implementations run on a distributed multiprocessor system.
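    A minimal sketch of the Hopfield model the abstract refers to: Hebbian storage of bipolar patterns and asynchronous threshold updates for recall. This is the textbook single-machine formulation, not the paper's MIMD-distributed implementation; the pattern sizes and update count are illustrative.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)          # no self-connections
    return W / patterns.shape[0]

def recall(W, state, steps=100, rng=None):
    """Asynchronous updates: repeatedly pick one neuron and threshold it."""
    rng = rng or np.random.default_rng(0)
    state = state.copy()
    for _ in range(steps):
        i = rng.integers(len(state))               # pick one neuron
        state[i] = 1 if W[i] @ state >= 0 else -1  # threshold update
    return state

# Store two 8-bit patterns and recover one from a corrupted probe.
pats = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                 [1, 1, 1, 1, -1, -1, -1, -1]])
W = train_hopfield(pats)
noisy = pats[0].copy()
noisy[0] *= -1
print(recall(W, noisy))
```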

    High-Performance Distributed ML at Scale through Parameter Server Consistency Models

    As Machine Learning (ML) applications increase in data size and model complexity, practitioners turn to distributed clusters to satisfy the increased computational and memory demands. Unfortunately, effective use of clusters for ML requires considerable expertise in writing distributed code, while highly abstracted frameworks like Hadoop have not, in practice, approached the performance seen in specialized ML implementations. The recent Parameter Server (PS) paradigm is a middle ground between these extremes, allowing easy conversion of single-machine parallel ML applications into distributed ones, while maintaining high throughput through relaxed "consistency models" that allow inconsistent parameter reads. However, due to insufficient theoretical study, it is not clear which of these consistency models can really ensure correct ML algorithm output; at the same time, there remain many theoretically motivated but unexplored opportunities to maximize computational throughput. Motivated by this challenge, we study both the theoretical guarantees and empirical behavior of iterative-convergent ML algorithms in existing PS consistency models. We then use the gleaned insights to improve a consistency model using an "eager" PS communication mechanism, and implement it as a new PS system that enables ML algorithms to reach their solution more quickly. Comment: 19 pages, 2 figures.
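    To make the relaxed-consistency idea concrete, here is a toy bounded-staleness read rule in the spirit of the Stale Synchronous Parallel family of PS consistency models. The class name, the staleness parameter, and the push/pull/clock interface are illustrative assumptions, not the API of the system described in the abstract.

```python
import threading

class BoundedStalenessServer:
    """Toy parameter server: a worker at clock c may read parameters only
    when the slowest worker has reached at least c - staleness.
    Illustrative sketch, not the paper's system."""

    def __init__(self, dim, n_workers, staleness=2):
        self.params = [0.0] * dim
        self.clocks = [0] * n_workers
        self.staleness = staleness
        self.cv = threading.Condition()

    def push(self, grads, lr=0.1):
        """Apply a worker's gradient update to the shared parameters."""
        with self.cv:
            for i, g in enumerate(grads):
                self.params[i] -= lr * g

    def clock(self, worker):
        """Worker finished an iteration; advance its clock."""
        with self.cv:
            self.clocks[worker] += 1
            self.cv.notify_all()

    def pull(self, worker):
        """Return (possibly stale) parameters, blocking only if this worker
        has run more than `staleness` clocks ahead of the slowest one."""
        with self.cv:
            while self.clocks[worker] - min(self.clocks) > self.staleness:
                self.cv.wait()
            return list(self.params)
```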

    Event Recognition Using Signal Spectrograms in Long Pulse Experiments

    As discharge duration increases, real-time complex analysis of the signal becomes more important. In this context, data acquisition and processing systems must provide models for designing experiments that use event-oriented plasma control. One example of advanced data analysis is signal classification. The off-line statistical analysis of a large number of discharges provides information for developing algorithms that determine plasma parameters from measurements of magnetohydrodynamic waves, for example, to detect density fluctuations induced by AlfvƩn cascades using morphological patterns. The need to apply different algorithms to the signals and to select subsequent processing algorithms based on previous results necessitates an event-based experiment. The Intelligent Test and Measurement System platform is an example of an architecture designed to implement distributed data acquisition and real-time processing systems. The processing algorithm sequence is modeled using an event-based paradigm. The adaptive capacity of this model is based on the logic defined by the use of state machines in SCXML. The Intelligent Test and Measurement System platform mixes a local multiprocessing model with a distributed deployment of services based on Jini.
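    As a rough illustration of the spectrogram-based classification step, the sketch below computes a time-frequency map with SciPy and flags windows whose in-band energy is anomalous. The synthetic signal, band limits, and threshold are invented for the example; the platform described in the abstract drives such algorithms through SCXML-defined state machines rather than this ad hoc loop.

```python
import numpy as np
from scipy import signal

fs = 10_000                       # sample rate (Hz), illustrative
t = np.arange(0, 2.0, 1 / fs)

# Synthetic "discharge" signal: a chirp-like burst buried in noise.
x = np.random.randn(t.size) * 0.2
burst = (t > 0.8) & (t < 1.2)
x[burst] += np.sin(2 * np.pi * (500 + 1500 * t[burst]) * t[burst])

# Spectrogram: the time-frequency map used as classification input.
f, ts, Sxx = signal.spectrogram(x, fs=fs, nperseg=256, noverlap=128)

# Crude event detector: flag windows whose 300-2500 Hz energy is anomalous.
band = (f >= 300) & (f <= 2500)
energy = Sxx[band].sum(axis=0)
events = ts[energy > energy.mean() + 3 * energy.std()]
print("event windows (s):", events)
```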

    A Prescription for Partial Synchrony

    Algorithms in message-passing distributed systems often require partial synchrony to tolerate crash failures. Informally, partial synchrony refers to systems where timing bounds on communication and computation may exist, but knowledge of such bounds is limited. Traditionally, the foundation for the theory of partial synchrony has been real time: a time base measured by counting events external to the system, like the vibrations of cesium atoms or piezoelectric crystals. Unfortunately, algorithms that are correct relative to many real-time-based models of partial synchrony may not behave correctly in empirical distributed systems. For example, a set of popular theoretical models, which we call M_*, assume (eventual) upper bounds on message delay and relative process speeds, regardless of message size and absolute process speeds. Empirical systems with bounded channel capacity and bandwidth cannot realize such assumptions either natively or through algorithmic constructions. Consequently, empirical deployment of the many M_*-based algorithms risks anomalous behavior. As a result, we argue that real time is the wrong basis for such a theory. Instead, the appropriate foundation for partial synchrony is fairness: a time base measured by counting events internal to the system, like the steps executed by the processes. By way of example, we redefine M_* models with fairness-based bounds and provide algorithmic techniques to implement fairness-based M_* models on a significant subset of empirical systems. The proposed techniques use failure detectors (system services that provide hints about process crashes) as intermediaries that preserve the fairness constraints native to empirical systems. In effect, algorithms that are correct in M_* models are now proved correct in such empirical systems as well. Demonstrating our results requires solving three open problems. (1) We propose the first unified mathematical framework based on Timed I/O Automata to specify empirical systems, partially synchronous systems, and algorithms that execute within them. (2) We show that the crash-tolerance capabilities of popular distributed systems can be denominated exclusively through fairness constraints. (3) We specify exemplar system models that identify the weakest system models needed to implement popular failure detectors.
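    The failure detectors mentioned above are commonly built from heartbeats with adaptive timeouts. The sketch below is a generic eventually-accurate (◇P-style) detector, included only to show the kind of intermediary the abstract relies on; the paper's fairness-preserving constructions are not reproduced here, and process names and timeouts are illustrative.

```python
import time

class HeartbeatFailureDetector:
    """Minimal eventually-accurate detector: suspect a process when no
    heartbeat arrives within its current timeout; on a false suspicion,
    grow the timeout so suspicions eventually become accurate."""

    def __init__(self, processes, initial_timeout=1.0):
        now = time.monotonic()
        self.timeout = {p: initial_timeout for p in processes}
        self.last_seen = {p: now for p in processes}
        self.suspected = set()

    def heartbeat(self, p):
        """Record a heartbeat; if p was wrongly suspected, back off."""
        if p in self.suspected:
            self.suspected.discard(p)
            self.timeout[p] *= 2
        self.last_seen[p] = time.monotonic()

    def check(self):
        """Return the current set of suspected processes."""
        now = time.monotonic()
        for p, seen in self.last_seen.items():
            if now - seen > self.timeout[p]:
                self.suspected.add(p)
        return set(self.suspected)
```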

    Distributed Markovian Bisimulation Reduction aimed at CSL Model Checking

    The verification of quantitative aspects like performance and dependability by means of model checking has become an important and vivid area of research over the past decade.

    An important result of that research is the logic CSL (continuous stochastic logic) and its corresponding model-checking algorithms. The evaluation of properties expressed in CSL makes it necessary to solve large systems of linear (differential) equations, usually by means of numerical analysis. Both the inherent time and space complexity of the numerical algorithms make it practically infeasible to model check systems with more than 100 million states, whereas realistic system models may have billions of states.

    To overcome this severe restriction, it is important to be able to replace the original state space with a probabilistically equivalent, but smaller one. The most prominent equivalence relation is bisimulation, for which a stochastic variant (Markovian bisimulation) also exists. In many cases, this bisimulation allows for a substantial reduction of the state space size. But these savings in space come at the cost of an increased time complexity. Therefore, in this paper a new distributed signature-based algorithm for the computation of the bisimulation quotient of a given state space is introduced.

    To demonstrate the feasibility of our approach in both a sequential and, more importantly, a distributed setting, we have performed a number of case studies.
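    A sequential sketch of signature-based partition refinement on a CTMC rate matrix, the core step that the paper distributes: each state's signature is the cumulative rate it emits into every current block, and blocks are split until the signatures stabilise. The data structures and the tiny example are invented for illustration; in practice the initial partition would also respect the state labelling used by the CSL formula, and the signature computation would be spread across workers.

```python
from collections import defaultdict

def signature_refine(states, rates):
    """Signature-based refinement toward the Markovian bisimulation quotient.
    `rates[s]` maps successor t -> rate R(s, t). Starts from a single block
    and splits blocks until every state's signature is stable."""
    block = {s: 0 for s in states}          # initial partition: one block
    while True:
        sigs = {}
        for s in states:
            acc = defaultdict(float)
            for t, r in rates.get(s, {}).items():
                acc[block[t]] += r          # cumulative rate into each block
            sigs[s] = frozenset(acc.items())
        # Re-number blocks by (old block, signature).
        new_ids, new_block = {}, {}
        for s in states:
            key = (block[s], sigs[s])
            new_block[s] = new_ids.setdefault(key, len(new_ids))
        if new_block == block:
            return new_block                # fixed point: quotient reached
        block = new_block

# Tiny example: states 0 and 1 are bisimilar, state 2 is absorbing.
print(signature_refine([0, 1, 2], {0: {2: 3.0}, 1: {2: 3.0}}))
```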

    Advances in parameter estimation techniques applied to flexible structures

    In this work, various parameter estimation techniques are investigated in the context of structural system identification utilizing distributed parameter models and 'measured' time-domain data. Distributed parameter models are formulated using the PDEMOD software developed by Taylor. Enhancements made to PDEMOD for this work include: (1) a Wittrick-Williams based root-solving algorithm; (2) a time simulation capability; and (3) various parameter estimation algorithms. The parameter estimation schemes are contrasted using the NASA Mini-Mast as the focus structure.
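    A hedged sketch of output-error parameter estimation from time-domain data, using a single damped mode as a stand-in for a distributed parameter model: simulate the response for candidate parameters and minimise the residual against 'measured' data with nonlinear least squares. The model, numbers, and SciPy solver are illustrative assumptions, not PDEMOD or the Mini-Mast study.

```python
import numpy as np
from scipy.optimize import least_squares

def response(params, t):
    """Free decay of a single damped mode: stand-in for a PDE model output."""
    omega, zeta = params
    wd = omega * np.sqrt(1 - zeta**2)
    return np.exp(-zeta * omega * t) * np.cos(wd * t)

# Synthetic "measured" data generated from true parameters plus noise.
t = np.linspace(0, 5, 500)
true = (6.0, 0.03)                           # natural frequency (rad/s), damping ratio
y_meas = response(true, t) + 0.02 * np.random.default_rng(1).normal(size=t.size)

# Output-error estimation: minimise the model-vs-data residual.
fit = least_squares(lambda p: response(p, t) - y_meas, x0=[4.0, 0.1],
                    bounds=([0.1, 0.0], [50.0, 0.99]))
print("estimated omega, zeta:", fit.x)
```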