
    Are Lock-Free Concurrent Algorithms Practically Wait-Free?

    Lock-free concurrent algorithms guarantee that some concurrent operation will always make progress in a finite number of steps. Yet programmers prefer to treat concurrent code as if it were wait-free, guaranteeing that all operations always make progress. Unfortunately, designing wait-free algorithms is generally a very complex task, and the resulting algorithms are not always efficient. While obtaining efficient wait-free algorithms has been a long-time goal for the theory community, most non-blocking commercial code is only lock-free. This paper suggests a simple solution to this problem. We show that, for a large class of lock-free algorithms, under scheduling conditions which approximate those found in commercial hardware architectures, lock-free algorithms behave as if they are wait-free. In other words, programmers can keep on designing simple lock-free algorithms instead of complex wait-free ones, and in practice, they will get wait-free progress. Our main contribution is a new way of analyzing a general class of lock-free algorithms under a stochastic scheduler. Our analysis relates the individual performance of processes with the global performance of the system using Markov chain lifting between a complex per-process chain and a simpler system progress chain. We show that lock-free algorithms are not only wait-free with probability 1, but that in fact a general subset of lock-free algorithms can be closely bounded in terms of the average number of steps required until an operation completes. To the best of our knowledge, this is the first attempt to analyze progress conditions, typically stated in relation to a worst-case adversary, in a stochastic model capturing their expected asymptotic behavior.
    Comment: 25 pages
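    The lock-free/wait-free distinction the abstract draws can be illustrated with a minimal sketch (not from the paper): a counter whose increment retries a compare-and-swap until it wins. Some thread's CAS always succeeds, so the system makes progress (lock-freedom), yet no individual thread has a worst-case step bound; the paper's claim is that under realistic stochastic schedulers each operation nonetheless completes in a bounded expected number of steps.

    ```cpp
    #include <atomic>
    #include <iostream>
    #include <thread>
    #include <vector>

    // Illustrative lock-free counter: a retry loop around compare_exchange.
    // Lock-freedom: a CAS only fails because a concurrent CAS succeeded,
    // so the system as a whole always makes progress.
    struct LockFreeCounter {
        std::atomic<long> value{0};
        void increment() {
            long cur = value.load(std::memory_order_relaxed);
            // On failure, cur is refreshed with the current value; retry.
            while (!value.compare_exchange_weak(cur, cur + 1,
                                                std::memory_order_relaxed)) {
            }
        }
    };

    int main() {
        LockFreeCounter c;
        std::vector<std::thread> ts;
        for (int i = 0; i < 4; ++i)
            ts.emplace_back([&c] { for (int j = 0; j < 100000; ++j) c.increment(); });
        for (auto& t : ts) t.join();
        std::cout << c.value.load() << "\n";  // prints 400000: no increment is lost
        return 0;
    }
    ```

    A wait-free variant would have to bound the retries of every thread individually; the abstract's point is that in practice this CAS loop already behaves as if it did.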

    Monatomic phase change memory

    Phase change memory has been developed into a mature technology capable of storing information in a fast and non-volatile way, with potential for neuromorphic computing applications. However, its future impact in electronics depends crucially on how the materials at the core of this technology adapt to the requirements arising from continued scaling towards higher device densities. A common strategy to fine-tune the properties of phase change memory materials, reaching reasonable thermal stability in optical data storage, relies on mixing precise amounts of different dopants, resulting often in quaternary or even more complicated compounds. Here we show how the simplest material imaginable, a single element (in this case, antimony), can become a valid alternative when confined in extremely small volumes. This compositional simplification eliminates problems related to unwanted deviations from the optimized stoichiometry in the switching volume, which become increasingly pressing when devices are aggressively miniaturized. Removing compositional optimization issues may allow one to capitalize on nanosize effects in information storage.

    Automating embedded analysis capabilities and managing software complexity in multiphysics simulation part I: template-based generic programming

    An approach for incorporating embedded simulation and analysis capabilities in complex simulation codes through template-based generic programming is presented. This approach relies on templating and operator overloading within the C++ language to transform a given calculation into one that can compute a variety of additional quantities that are necessary for many state-of-the-art simulation and analysis algorithms. An approach for incorporating these ideas into complex simulation codes through general graph-based assembly is also presented. These ideas have been implemented within a set of packages in the Trilinos framework and are demonstrated on a simple problem from chemical engineering.
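    The core mechanism the abstract describes, templating plus operator overloading to make one calculation yield extra quantities, can be sketched with a hand-rolled forward-mode derivative type (illustrative only; the `Dual` struct and `residual` kernel below are not the Trilinos/Sacado API). A kernel written once against a template scalar type evaluates plainly with `double` and evaluates value plus sensitivity with `Dual`.

    ```cpp
    #include <iostream>

    // Minimal forward-mode scalar: carries a value and a derivative.
    // Overloaded operators propagate derivatives by the chain rule.
    struct Dual {
        double val, der;
        Dual(double v = 0.0, double d = 0.0) : val(v), der(d) {}
    };
    Dual operator+(Dual a, Dual b) { return Dual(a.val + b.val, a.der + b.der); }
    Dual operator*(Dual a, Dual b) {
        return Dual(a.val * b.val, a.der * b.val + a.val * b.der);
    }

    // Generic "physics" kernel, written once for any scalar type.
    template <typename Scalar>
    Scalar residual(Scalar x) { return x * x + x; }

    int main() {
        double r = residual(3.0);           // plain evaluation: 3*3 + 3 = 12
        Dual d = residual(Dual(3.0, 1.0));  // seeded: d/dx(x^2 + x) at x=3 is 7
        std::cout << r << " " << d.val << " " << d.der << "\n";  // prints 12 12 7
        return 0;
    }
    ```

    The same templating pattern extends beyond derivatives to the other "embedded quantities" the paper targets (e.g. polynomial chaos or extended precision), since the kernel source is unchanged across scalar types.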

    Correlation-powered Information Engines and the Thermodynamics of Self-Correction

    Information engines can use structured environments as a resource to generate work by randomizing ordered inputs and leveraging the increased Shannon entropy to transfer energy from a thermal reservoir to a work reservoir. We give a broadly applicable expression for the work production of an information engine, generally modeled as a memoryful channel that communicates inputs to outputs as it interacts with an evolving environment. The expression establishes that an information engine must have more than one memory state in order to leverage input-environment correlations. To demonstrate this capability, we designed an information engine powered solely by temporal correlations and not by statistical biases, as employed by previous engines. Key to this is the engine's ability to synchronize: the engine automatically returns to a desired dynamical phase when thrown into an unwanted, dissipative phase by corruptions in the input, that is, by unanticipated environmental fluctuations. This self-correcting mechanism is robust up to a critical level of corruption, beyond which the system fails to act as an engine. We give explicit analytical expressions for both work and critical corruption level and summarize engine performance via a thermodynamic-function phase diagram over engine control parameters. The results reveal a new thermodynamic mechanism based on nonergodicity that underlies error correction as it operates to support resilient engineered and biological systems.
    Comment: 22 pages, 13 figures; http://csc.ucdavis.edu/~cmg/compmech/pubs/tos.ht

    Resilience to time-correlated noise in quantum computation

    Fault-tolerant quantum computation techniques rely on weakly correlated noise. Here I show that it is enough to assume weak spatial correlations: time correlations can take any form. In particular, single-shot error correction techniques exhibit a noise threshold for quantum memories under spatially local stochastic noise.
    Comment: 16 pages, v3: as accepted in journal

    Skyrmion Gas Manipulation for Probabilistic Computing

    The topologically protected magnetic spin configurations known as skyrmions offer promising applications due to their stability, mobility and localization. In this work, we emphasize how to leverage the thermally driven dynamics of an ensemble of such particles to perform computing tasks. We propose a device employing a skyrmion gas to reshuffle a random signal into an uncorrelated copy of itself. This is demonstrated by modelling the ensemble dynamics in a collective coordinate approach where skyrmion-skyrmion and skyrmion-boundary interactions are accounted for phenomenologically. Our numerical results are used to develop a proof-of-concept for an energy efficient (∼ μW) device with a low area imprint (∼ μm²). While its immediate application to stochastic computing circuit designs is apparent, we argue that its basic functionality, reminiscent of an integrate-and-fire neuron, qualifies it as a novel bio-inspired building block.
    Comment: 41 pages, 20 figures
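    The reshuffling function the abstract describes can be caricatured in a few lines (a toy model, not the paper's skyrmion simulation): bits enter a finite "chamber" and leave it in random order, the way skyrmions diffuse in a confined gas. Stochastic computing encodes a probability as the fraction of 1s in a bitstream, so a valid reshuffler must preserve that fraction while destroying the stream's temporal correlations.

    ```cpp
    #include <iostream>
    #include <numeric>
    #include <random>
    #include <vector>

    // Toy reshuffler: buffer up to `chamber` bits, then emit a uniformly
    // random occupant for each new arrival. The output is a permutation of
    // the input, so the encoded probability (fraction of 1s) is unchanged,
    // but input ordering (temporal correlation) is scrambled.
    std::vector<int> reshuffle(const std::vector<int>& in, std::size_t chamber,
                               unsigned seed) {
        std::mt19937 rng(seed);
        std::vector<int> buf, out;
        for (int bit : in) {
            buf.push_back(bit);
            if (buf.size() == chamber) {
                std::uniform_int_distribution<std::size_t> pick(0, buf.size() - 1);
                std::size_t i = pick(rng);
                out.push_back(buf[i]);
                buf.erase(buf.begin() + i);
            }
        }
        for (int bit : buf) out.push_back(bit);  // drain remaining occupants
        return out;
    }

    int main() {
        // A highly correlated stream encoding p = 0.5.
        std::vector<int> in = {1,1,1,1,0,0,0,0,1,1,1,1,0,0,0,0};
        std::vector<int> out = reshuffle(in, 4, 42);
        int ones_in = std::accumulate(in.begin(), in.end(), 0);
        int ones_out = std::accumulate(out.begin(), out.end(), 0);
        std::cout << ones_in << " " << ones_out << "\n";  // prints 8 8
        return 0;
    }
    ```

    A larger chamber decorrelates more aggressively at the cost of latency, mirroring the device-size trade-off the paper's collective-coordinate model explores.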