
    Are Lock-Free Concurrent Algorithms Practically Wait-Free?

    Lock-free concurrent algorithms guarantee that some concurrent operation will always make progress in a finite number of steps. Yet programmers prefer to treat concurrent code as if it were wait-free, guaranteeing that all operations always make progress. Unfortunately, designing wait-free algorithms is generally a very complex task, and the resulting algorithms are not always efficient. While obtaining efficient wait-free algorithms has been a long-standing goal for the theory community, most non-blocking commercial code is only lock-free. This paper suggests a simple solution to this problem. We show that, for a large class of lock-free algorithms, under scheduling conditions which approximate those found in commercial hardware architectures, lock-free algorithms behave as if they are wait-free. In other words, programmers can keep designing simple lock-free algorithms instead of complex wait-free ones, and in practice they will get wait-free progress. Our main contribution is a new way of analyzing a general class of lock-free algorithms under a stochastic scheduler. Our analysis relates the individual performance of processes to the global performance of the system using Markov chain lifting between a complex per-process chain and a simpler system-progress chain. We show not only that lock-free algorithms are wait-free with probability 1, but that a general subset of lock-free algorithms can be closely bounded in terms of the average number of steps required until an operation completes. To the best of our knowledge, this is the first attempt to analyze progress conditions, typically stated in relation to a worst-case adversary, in a stochastic model capturing their expected asymptotic behavior.
    Comment: 25 pages
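
    The practical claim is easy to probe with a toy model. The sketch below is an illustration under made-up assumptions, not the paper's analysis: it runs several processes through a read-then-CAS counter increment under a uniform stochastic scheduler and reports the average and worst number of scheduled steps an operation needed to complete. All names are invented for the example.

```python
import random
from collections import defaultdict

def simulate(n_procs=8, total_ops=10_000, seed=1):
    """Toy lock-free counter: each operation is a read of the shared
    value followed by a CAS attempt; a uniform stochastic scheduler
    decides which process takes the next step."""
    rng = random.Random(seed)
    shared = 0
    snapshot = [None] * n_procs   # value each process last read, or None
    steps = defaultdict(int)      # steps spent on the in-flight operation
    done = []                     # steps each completed operation took
    while len(done) < total_ops:
        p = rng.randrange(n_procs)        # the stochastic scheduler
        steps[p] += 1
        if snapshot[p] is None:           # first step: read the shared value
            snapshot[p] = shared
        elif shared == snapshot[p]:       # CAS succeeds -> operation completes
            shared += 1
            done.append(steps[p])
            steps[p] = 0
            snapshot[p] = None
        else:                             # CAS fails -> retry from the read
            snapshot[p] = None
    return sum(done) / len(done), max(done)

if __name__ == "__main__":
    avg, worst = simulate()
    print(f"average steps per op: {avg:.2f}, worst observed: {worst}")
```

    Under this scheduler every operation completes with probability 1, and the per-operation step count stays tightly concentrated around its mean, which is the behaviour the paper's Markov chain analysis makes precise.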

    Asynchronous Optimization Methods for Efficient Training of Deep Neural Networks with Guarantees

    Asynchronous distributed algorithms are a popular way to reduce synchronization costs in large-scale optimization, in particular for neural network training. However, for nonsmooth and nonconvex objectives, few convergence guarantees exist beyond cases where closed-form proximal operator solutions are available. As most popular contemporary deep neural networks lead to nonsmooth and nonconvex objectives, there is now a pressing need for such convergence guarantees. In this paper, we analyze for the first time the convergence of stochastic asynchronous optimization for this general class of objectives. In particular, we focus on stochastic subgradient methods allowing for block variable partitioning, where the shared-memory-based model is asynchronously updated by concurrent processes. To this end, we first introduce a probabilistic model which captures key features of real asynchronous scheduling between concurrent processes; under this model, we establish convergence with probability one to an invariant set for stochastic subgradient methods with momentum. From a practical perspective, one issue with this family of methods is that it is not efficiently supported by machine learning frameworks, which mostly focus on distributed data-parallel strategies. To address this, we propose a new implementation strategy for shared-memory-based training of deep neural networks, whereby concurrent parameter servers are used to train a partitioned but shared model in single- and multi-GPU settings. Based on this implementation, we achieve an average 1.2x speed-up over state-of-the-art training methods on popular image classification tasks without compromising accuracy.
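
    A minimal sketch of the class of methods being analyzed, assuming a Hogwild-style shared-memory setup rather than the authors' concurrent parameter-server implementation; all function and variable names are illustrative. Worker threads apply momentum subgradient updates to disjoint blocks of a shared parameter vector, each reading a possibly stale copy of the full vector.

```python
import threading
import numpy as np

def async_subgradient_train(loss_subgrad, w, n_workers=4,
                            steps=1000, lr=1e-2, beta=0.9, seed=0):
    """Hogwild-style asynchronous block subgradient method with momentum.
    Each worker owns one contiguous block of w and updates it without locks,
    reading the (possibly stale) full vector for its subgradient. Python's
    GIL serializes the numpy calls, so this illustrates the update rule and
    the staleness pattern rather than true parallel speed."""
    blocks = np.array_split(np.arange(w.size), n_workers)
    velocities = [np.zeros(len(b)) for b in blocks]

    def worker(k):
        rng = np.random.default_rng(seed + k)
        v, idx = velocities[k], blocks[k]
        for _ in range(steps):
            g = loss_subgrad(w, rng)[idx]   # stochastic subgradient, own block
            v[:] = beta * v + g             # momentum buffer
            w[idx] -= lr * v                # asynchronous in-place update

    threads = [threading.Thread(target=worker, args=(k,)) for k in range(n_workers)]
    for t in threads: t.start()
    for t in threads: t.join()
    return w

# Example: nonsmooth, nonconvex-free toy objective f(w) = ||w - 1||_1 plus noise.
if __name__ == "__main__":
    subgrad = lambda w, rng: np.sign(w - 1.0) + 0.1 * rng.standard_normal(w.size)
    w = async_subgradient_train(subgrad, np.zeros(64))
    print("mean |w - 1| =", float(np.abs(w - 1.0).mean()))
```

    The block partitioning is what lets each worker write without coordination; under the paper's probabilistic scheduling model, convergence is to an invariant set rather than necessarily to a single point.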

    Noise reduction in coarse bifurcation analysis of stochastic agent-based models: an example of consumer lock-in

    We investigate coarse equilibrium states of a fine-scale, stochastic agent-based model of consumer lock-in in a duopolistic market. In the model, agents decide on their next purchase based on a combination of their personal preference and their neighbours' opinions. For agents with independent, identically distributed parameters and all-to-all coupling, we derive an approximate analytic coarse evolution map for the expected average purchase. We then study the emergence of coarse fronts when spatial segregation is present in the relative perceived quality of products. We develop a novel Newton-Krylov method that can accurately and efficiently compute coarse fixed points when the underlying fine-scale dynamics is stochastic. The main novelty of the algorithm is the elimination of the noise that is generated when estimating Jacobian-vector products using time-integration of perturbed initial conditions. We present numerical results that demonstrate the convergence properties of the numerical method, and use the method to show that macroscopic fronts in this model destabilise at a coarse symmetry-breaking bifurcation.
    Comment: This version of the manuscript was accepted for publication in SIAD
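
    The flavour of the coarse fixed-point computation can be sketched with scipy's generic Newton-Krylov solver. The paper's novelty is a scheme that eliminates sampling noise in Jacobian-vector products estimated from perturbed initial conditions; the sketch below instead sidesteps that noise with two standard devices, common random numbers (a frozen seed) and agent-level averaging, so it illustrates the setup rather than the authors' algorithm. The toy coarse map and all names are made up.

```python
import numpy as np
from scipy.optimize import newton_krylov

def coarse_step(x, n_agents=20_000, seed=0):
    """Toy coarse time-stepper Phi(x): map the average purchase x through
    one round of agent decisions, Monte Carlo-estimated over sampled
    preferences. The frozen seed (common random numbers) and the use of
    each agent's *expected* purchase keep Phi smooth in x, so the
    finite-difference Jacobian-vector products inside the Krylov solver
    are not swamped by sampling noise."""
    rng = np.random.default_rng(seed)
    prefs = rng.normal(0.0, 1.0, n_agents)       # frozen personal preferences
    # Logistic response to own preference plus social coupling to x;
    # 2p - 1 maps a buy probability to an expected purchase in [-1, 1].
    p_buy = 1.0 / (1.0 + np.exp(-2.0 * (prefs + 2.0 * x)))
    return (2.0 * p_buy - 1.0).mean()

def residual(x):
    # Coarse fixed point: F(x) = Phi(x) - x = 0.
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return np.array([coarse_step(x[0]) - x[0]])

if __name__ == "__main__":
    # From a biased initial market share, Newton-Krylov converges to a
    # nonzero equilibrium, i.e. a locked-in state (x = 0 is also a fixed
    # point, but an unstable one at this coupling strength).
    x_star = newton_krylov(residual, np.array([0.5]), f_tol=1e-10)
    print("coarse locked-in equilibrium:", float(x_star[0]))
```

    Replacing the averaged response with sampled ±1 decisions reproduces the problem the paper tackles: even with a frozen seed the coarse map becomes piecewise constant in x, so naive finite-difference directional derivatives degenerate, which is what the authors' noise-elimination step is designed to avoid.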