960 research outputs found

    Fast Deterministic Consensus in a Noisy Environment

    It is well known that the consensus problem cannot be solved deterministically in an asynchronous environment, but that randomized solutions are possible. We propose a new model, called noisy scheduling, in which an adversarial schedule is perturbed randomly, and show that in this model randomness in the environment can substitute for randomness in the algorithm. In particular, we show that a simplified, deterministic version of Chandra's wait-free shared-memory consensus algorithm (PODC, 1996, pp. 166-175) solves consensus in time at most logarithmic in the number of active processes. The proof of termination is based on showing that a race between independent delayed renewal processes produces a winner quickly. In addition, we show that the protocol finishes in constant time under quantum and priority-based scheduling on a uniprocessor, suggesting that it is robust over a wide range of scheduling models.
    Comment: Typographical errors fixed.
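
    The renewal-race idea behind the termination argument can be illustrated with a toy simulation: each process takes unit steps under an adversarial schedule perturbed by random delay, and the race ends once one process leads every other by a fixed margin. Every name and parameter below is illustrative, not taken from the paper.

```python
import random

def race(num_procs=4, lead_needed=2, seed=1, max_steps=10_000):
    """Toy race between independent delayed renewal processes.

    Each process takes unit steps, but its schedule is perturbed by a
    random extra delay (the 'noise'). The race ends when one process
    leads every other by `lead_needed` steps -- the kind of event the
    termination proof shows happens quickly.
    """
    rng = random.Random(seed)
    next_fire = [rng.random() for _ in range(num_procs)]  # initial delays
    position = [0] * num_procs
    for _ in range(max_steps):
        # the process whose next step fires earliest moves
        i = min(range(num_procs), key=lambda j: next_fire[j])
        position[i] += 1
        next_fire[i] += 1.0 + rng.expovariate(1.0)  # perturbed schedule
        if all(position[i] - position[j] >= lead_needed
               for j in range(num_procs) if j != i):
            return i, position
    return None, position

winner, positions = race()
```

    With the noise drawn from a continuous distribution, ties are broken automatically and a winner emerges after a short burst of steps in this toy model.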

    Proceedings of the real-time database workshop, Eindhoven, 23 February 1995


    Engineering Resilient Collective Adaptive Systems by Self-Stabilisation

    Collective adaptive systems are an emerging class of networked computational systems, particularly suited to application domains such as smart cities, complex sensor networks, and the Internet of Things. These systems tend to feature large scale, heterogeneity of communication model (including opportunistic peer-to-peer wireless interaction), and require inherent self-adaptiveness properties to address unforeseen changes in operating conditions. In this context, it is extremely difficult (if not seemingly intractable) to engineer reusable pieces of distributed behaviour so as to make them provably correct and smoothly composable. Building on the field calculus, a computational model (and associated toolchain) capturing the notion of aggregate network-level computation, we address this problem with an engineering methodology coupling formal theory and computer simulation. On the one hand, functional properties are addressed by identifying the largest-to-date field calculus fragment generating self-stabilising behaviour, guaranteed to eventually attain a correct and stable final state despite any transient perturbation in state or topology, and including highly reusable building blocks for information spreading, aggregation, and time evolution. On the other hand, dynamical properties are addressed by simulation, empirically evaluating the different performances that can be obtained by switching between implementations of building blocks with provably equivalent functional properties. Overall, our methodology sheds light on how to identify core building blocks of collective behaviour, and how to select implementations that improve system performance while leaving overall system function and resiliency properties unchanged.
    Comment: To appear in ACM Transactions on Modeling and Computer Simulation.
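
    The self-stabilisation guarantee claimed for the "spreading" building block can be illustrated with the classic hop-count gradient: each node repeatedly recomputes its value from its neighbours, and from any initial state the network converges to the correct distances. This is a toy sketch of the idea, not the field calculus implementation.

```python
def stabilise(adj, source, state=None, max_rounds=100):
    """Self-stabilising hop-count gradient on an undirected graph.

    Each round, the source sets its value to 0 and every other node
    sets its value to 1 + min over its neighbours. Starting from *any*
    state (including a corrupted one), this converges to true hop
    distances -- transient perturbations are eventually forgotten.
    """
    n = len(adj)
    INF = float('inf')
    if state is None:
        state = [INF] * n
    for _ in range(max_rounds):
        new = [0 if v == source else
               1 + min((state[u] for u in adj[v]), default=INF)
               for v in range(n)]
        if new == state:          # fixed point reached
            return state
        state = new
    return state

# line graph 0-1-2-3
adj = [[1], [0, 2], [1, 3], [2]]
dist = stabilise(adj, source=0)                     # → [0, 1, 2, 3]
perturbed = stabilise(adj, 0, state=[5, 0, 9, 1])   # recovers same answer
```

    Running the same rounds from a deliberately corrupted state yields the same fixed point, which is exactly the "correct and stable final state despite any transient perturbation" property.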

    Parallel Natural Language Parsing: From Analysis to Speedup

    Electrical Engineering, Mathematics and Computer Science

    Reported worker characteristics for entry-level employees in information processing.

    The population consisted of the major employers (those with five hundred or more employees) in the Oklahoma City Metropolitan Area as listed in the "Statistical Abstract of Oklahoma 1980." The study was limited to the companies which were computer-based within the State of Oklahoma. Thirty-two companies participated in the project. Analyzed and evaluated in this study were worker characteristics and skills deemed necessary by managers in business and industry to secure employment and to succeed in the area of Information Processing. Statistics were compiled on one hundred and sixty items taken from the interview-questionnaire. A Condescriptive Program computed the statistical mean on a five-point scale as to the importance of each item, and percentile responses were illustrated through the use of a Crosstabs Program. For additional background information concerning the population, five items relating to demographic factors of the companies were tabulated. An interview-questionnaire was developed and was submitted to a panel of experts for validation. The experts were members of the Business Advisory Committee for the Business and Economics Department at the University of Science and Arts of Oklahoma. For validation by business and industry, five companies were selected from the population and personal interviews were conducted with those companies.

    The Crypto-democracy and the Trustworthy

    In the current architecture of the Internet, there is a strong asymmetry in terms of power between the entities that gather and process personal data (e.g., major Internet companies, telecom operators, cloud providers) and the individuals from whom this personal data originates. In particular, individuals have no choice but to blindly trust that these entities will respect their privacy and protect their personal data. In this position paper, we address this issue by proposing a utopian crypto-democracy model based on existing scientific achievements from the field of cryptography. More precisely, our main objective is to show that cryptographic primitives, including in particular secure multiparty computation, offer a practical solution to protect privacy while minimizing the trust assumptions. In the crypto-democracy envisioned, individuals do not have to trust a single physical entity with their personal data; rather, their data is distributed among several institutions. Together these institutions form a virtual entity called the Trustworthy, which is responsible for the storage of this data but which can also compute on it (provided that all the institutions agree). Finally, we also propose a realistic proof-of-concept of the Trustworthy, in which the roles of institutions are played by universities. This proof-of-concept would have an important impact in demonstrating the possibilities offered by the crypto-democracy paradigm.
    Comment: DPM 201
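
    The kind of distributed computation the Trustworthy performs can be illustrated with the simplest secure multiparty primitive, additive secret sharing: each individual's datum is split into random shares held by different institutions, any subset of which reveals nothing, yet the institutions can jointly compute an aggregate. The modulus and three-institution setup below are illustrative assumptions, not the paper's protocol.

```python
import random

PRIME = 2**61 - 1  # field modulus (an illustrative choice)

def share(secret, n, rng):
    """Split `secret` into n additive shares mod PRIME.
    Any n-1 shares are uniformly random and reveal nothing."""
    shares = [rng.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each individual shares their datum among 3 institutions. Each
# institution sums the shares it holds locally; only the *aggregate*
# over all individuals is ever reconstructed.
rng = random.Random(0)
data = [42, 17, 99]
columns = list(zip(*(share(x, 3, rng) for x in data)))
partials = [sum(col) % PRIME for col in columns]
total = reconstruct(partials)  # equals sum(data), no datum revealed
```

    Real multiparty computation protocols go far beyond sums (multiplication, comparisons), but this additive trick already captures the trust model: no single institution ever holds an individual's datum in the clear.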

    Transputer Implementation for the Shell Model and Sd Shell Calculations

    This thesis consists of two parts. The first part discusses a new shell model implementation based on communicating sequential processes. The second part contains different shell model calculations, which have been done using an earlier implementation. Sequential processing computers appear to be fast reaching their upper limits of efficiency. Presently they can perform one machine operation in every clock cycle, and silicon technology also seems to have reached its physical limits of miniaturization. Hence new software/hardware approaches should be investigated in order to meet growing computational requirements. Parallel processing has been demonstrated to be one alternative to achieve this objective, but the major problem with this approach is that many algorithms used for the solution of physical problems are not suitable for distribution over a number of processors. In part one of this work we have identified the concurrency inherent in shell model calculations and implemented it on the Meiko Computing Surface. First we explain the motivation for this project and then give a detailed comparison of the different hardware/software that has been available to us and the reasons for our preferred choice. Similarly, we outline the advantages/disadvantages of the available parallel/sequential languages before choosing parallel C as our language of implementation. We describe our new serial implementation DASS, the Dynamic And Structured Shell model, which forms the basis for the parallel version. We have developed a new algorithm for the phase calculation of Slater determinants, which is superior to the previously used occupancy-representation method; both our serial and parallel implementations have adopted this representation. PARALLEL GLASNAST (PARALLEL GLASgow Nuclear Algorithmic Technique) is our complete implementation of the inherent parallelism in shell model calculations and is described in detail. It is based on splitting the whole calculation into three tasks, which can be distributed over the number of processors required by the chosen topology and executed concurrently. We also give a detailed discussion of the communication/synchronization protocols which preserve the available concurrency. We have achieved a complete overlap of the two main tasks: one responsible for the arithmetically intensive operations, the other searching among possibly millions of states. This demonstrates that the implementation of these tasks has enough built-in flexibility that they could be run on any number of processors. Execution times for one and three transputers have been obtained for 28Si and are fairly good. We have also undertaken a detailed analysis of how the amount of communication (traffic) between processors changes as the number of states increases.
    Part two describes shell model calculations for mass-21 nuclei. Many previous calculations have not taken the Coulomb interaction into account, which is responsible for differences between mirror nuclei; they also do not use the valuable information on nucleon occupancies. We have made extensive calculations for the six isobars in mass 21 using the CWC, PW and USD interactions. The results obtained include energy, spin, isospin and electromagnetic transition rates. These results are discussed and conclusions drawn. We concentrate on the comparison of the properties of each mirror pair. This comparison is supplemented by tables, energy level diagrams and occupancy diagrams. As we consider each mirror pair individually, the mixing of states, which is caused by the short-range nuclear force and the Coulomb force, becomes more evident. We have also noticed that some pairs of states swap places, between a mirror pair, on the occupancy diagram, suggesting that their wave functions might have been swapped. We have undertaken a detailed study to discover any swapped states. The tests applied to confirm this include comparison of energy, electromagnetic properties and the occupancy information obtained with different interactions. We find that only the 91, 92 states in Al have swapped over. We also report some real energy gaps which exist on the basis of our calculations for Al.
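
    The overlap of an arithmetically intensive task with a concurrent search task, coordinated through message passing, can be sketched in a CSP flavour with threads and a bounded channel. This toy pipeline is an illustration of the communicating-sequential-processes style (in Python rather than parallel C), not GLASNAST itself.

```python
import queue
import threading

def pipeline(items):
    """Overlap an 'arithmetic' producer with a 'search' consumer.
    The bounded queue acts as a CSP channel: it both carries data and
    synchronises the two tasks, so neither runs far ahead of the other."""
    chan = queue.Queue(maxsize=4)   # bounded channel
    hits = []

    def arithmetic():               # task 1: intensive computation
        for x in items:
            chan.put(x * x)
        chan.put(None)              # end-of-stream marker

    def search():                   # task 2: scan results concurrently
        while (v := chan.get()) is not None:
            if v % 3 == 0:
                hits.append(v)

    t1 = threading.Thread(target=arithmetic)
    t2 = threading.Thread(target=search)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return hits

found = pipeline(range(10))  # squares divisible by 3
```

    Because the channel is the only shared state, either task could be moved to another processor (or, historically, another transputer) without changing the program's logic.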

    Application Agreement and Integration Services

    Get PDF
    Application agreement and integration services are required by distributed, fault-tolerant, safety-critical systems to assure required performance. An analysis of distributed and hierarchical agreement strategies is developed against the backdrop of observed agreement failures in fielded systems. The documented work was performed under NASA Task Order NNL10AB32T, Validation And Verification of Safety-Critical Integrated Distributed Systems Area 2. This document is intended to satisfy the requirements for deliverable 5.2.11 under Task 4.2.2.3. This report discusses the challenges of maintaining application agreement and integration services. A literature search is presented that documents previous work in the area of replica determinism. Sources of non-deterministic behavior are identified, and examples are presented where system-level agreement failed to be achieved. We then explore how TTEthernet services can be extended to supply some interesting application agreement frameworks. This document assumes that the reader is familiar with the TTEthernet protocol; the reader is advised to read the TTEthernet protocol standard [1] before reading this document. This document does not reiterate the content of the standard.
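
    A minimal sketch of the decision an exact-agreement service must make: replicas that may have diverged (due to non-determinism) vote, and agreement succeeds only if some value reaches a quorum. This is an illustrative toy, not one of the TTEthernet services discussed in the report.

```python
from collections import Counter

def exact_agreement(replica_values, quorum):
    """Return the agreed value if at least `quorum` replicas proposed
    it, else None (agreement failure). Replica divergence caused by
    non-deterministic behavior shows up as a failed quorum."""
    value, count = Counter(replica_values).most_common(1)[0]
    return value if count >= quorum else None

ok = exact_agreement([7, 7, 7, 8], quorum=3)    # one faulty replica → 7
bad = exact_agreement([7, 8, 9], quorum=2)      # full divergence → None
```

    Real agreement strategies must also bound *when* the vote happens and mask inexact (approximate) disagreement; the quorum check above is only the final, simplest step.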