
    Stretching Homopolymers

    Force-induced stretching of polymers is important in a variety of contexts. We have used theory and simulations to describe the response of homopolymers, with $N$ monomers, to force ($f$) in good and poor solvents. In good solvents and for sufficiently large $N$ we show, in accord with scaling predictions, that the mean extension along the $f$ axis scales as $\sim f$ for small $f$, and as $\sim f^{2/3}$ (the Pincus regime) for intermediate values of $f$. The theoretical predictions for $\langle Z \rangle$ as a function of $f$ are in excellent agreement with simulations for $N = 100$ and $1600$. However, even with $N = 1600$, the expected Pincus regime is not observed due to the breakdown of the assumptions in the blob picture for finite $N$. We predict that Pincus scaling in a good solvent will be observed for $N \gtrsim 10^5$. The force-dependent structure factors for a polymer in a poor solvent show that there is a hierarchy of structures, depending on the nature of the solvent. For a weakly hydrophobic polymer, various structures (ideal conformations, self-avoiding chains, globules, and rods) emerge on distinct length scales as $f$ is varied. A strongly hydrophobic polymer remains globular as long as $f$ is less than a critical value $f_c$. Above $f_c$, an abrupt first-order transition to a rod-like structure occurs. Our predictions can be tested using single-molecule experiments. Comment: 24 pages, 7 figures
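
    The two good-solvent regimes above translate directly into a piecewise scaling formula. The following Python sketch evaluates it with all prefactors set to one, a Flory exponent of 3/5, and crossover forces taken from the standard blob picture; the function name and parameter values are illustrative, not from the paper.

        import numpy as np

        def mean_extension(f, N, b=1.0, kT=1.0):
            """Blob-picture estimate of the mean extension <Z> at force f."""
            f = np.asarray(f, dtype=float)
            R_F = b * N**0.6                    # Flory radius, R_F ~ b N^(3/5)
            f_low = kT / R_F                    # end of the linear-response regime
            z = np.where(f < f_low,
                         R_F**2 * f / kT,                  # linear: <Z> ~ f R_F^2 / kT
                         N * b * (f * b / kT)**(2.0/3.0))  # Pincus: <Z> ~ N b (f b/kT)^(2/3)
            return np.minimum(z, N * b)         # cap at the contour length N b

        forces = np.logspace(-3.0, 0.0, 7)
        print(mean_extension(forces, N=1600))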

    Depletion effects and loop formation in self-avoiding polymers

    Langevin dynamics is employed to study the looping kinetics of self-avoiding polymers in both ideal and crowded solutions. Rich kinetics result from the competition of two crowding-induced effects: the depletion attraction and the enhanced viscous friction. For short chains, the enhanced friction slows down looping, while, for longer chains, the depletion attraction renders it more frequent and persistent. We discuss the possible relevance of these findings for chromatin looping in living cells. Comment: 4 pages, 3 figures
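
    As a rough illustration of the method (not the paper's model), here is a minimal overdamped Langevin integrator for a phantom bead-spring chain that records a looping time as the first passage of the end-to-end distance below a capture radius; excluded volume and crowders, which drive the effects reported above, are omitted for brevity.

        import numpy as np

        def looping_time(N=16, k=3.0, gamma=1.0, kT=1.0, a_c=1.0,
                         dt=1e-3, max_steps=200_000, seed=0):
            """Euler-Maruyama integration until the chain ends first meet."""
            rng = np.random.default_rng(seed)
            x = np.cumsum(rng.normal(size=(N, 3)), axis=0)   # random-walk start
            for step in range(max_steps):
                bonds = x[1:] - x[:-1]
                force = np.zeros_like(x)                     # harmonic bond forces
                force[:-1] += k * bonds
                force[1:] -= k * bonds
                x += force * dt / gamma \
                     + np.sqrt(2.0 * kT * dt / gamma) * rng.normal(size=x.shape)
                if np.linalg.norm(x[-1] - x[0]) < a_c:       # ends have met: a loop
                    return step * dt
            return float("inf")

        print(looping_time())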

    Generalizing DP-SGD with Shuffling and Batch Clipping

    Classical differentially private DP-SGD implements individual clipping with random subsampling, which forces a mini-batch SGD approach. We provide a general differentially private algorithmic framework that goes beyond DP-SGD and allows any first-order optimizer (e.g., classical SGD and momentum-based SGD approaches) in combination with batch clipping, which clips an aggregate of computed gradients rather than summing clipped gradients (as is done in individual clipping). The framework also admits sampling techniques beyond random subsampling, such as shuffling. Our DP analysis follows the $f$-DP approach and introduces a new proof technique which allows us to derive simple closed-form expressions and to also analyse group privacy. In particular, for $E$ epochs of work and groups of size $g$, we show a $\sqrt{gE}$ DP dependency for batch clipping with shuffling. Comment: Update disclaimer
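
    The distinction between individual and batch clipping is easy to state in code. A minimal sketch, with hypothetical names and a Gaussian noise scale sigma chosen for illustration:

        import numpy as np

        def clip(v, C):
            n = np.linalg.norm(v)
            return v * min(1.0, C / n) if n > 0 else v

        def individual_clipping(grads, C, sigma, rng):
            s = sum(clip(g, C) for g in grads)        # clip each gradient, then sum
            return s + sigma * C * rng.normal(size=s.shape)

        def batch_clipping(grads, C, sigma, rng):
            agg = np.mean(grads, axis=0)              # aggregate first ...
            return clip(agg, C) + sigma * C * rng.normal(size=agg.shape)  # ... then clip once

        rng = np.random.default_rng(0)
        grads = [rng.normal(size=5) for _ in range(8)]
        print(individual_clipping(grads, C=1.0, sigma=1.0, rng=rng))
        print(batch_clipping(grads, C=1.0, sigma=1.0, rng=rng))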

    Batch Clipping and Adaptive Layerwise Clipping for Differential Private Stochastic Gradient Descent

    Each round in Differentially Private Stochastic Gradient Descent (DPSGD) transmits a sum of clipped gradients obfuscated with Gaussian noise to a central server, which uses this to update a global model that often represents a deep neural network. Since the clipped gradients are computed separately, which we call Individual Clipping (IC), deep neural networks like resnet-18 cannot use Batch Normalization Layers (BNL), which are a crucial component of deep neural networks for achieving high accuracy. To utilize BNL, we introduce Batch Clipping (BC) where, instead of clipping single gradients as in the original DPSGD, we average and clip batches of gradients. Moreover, the model entries of different layers have different sensitivities to the added Gaussian noise. Therefore, Adaptive Layerwise Clipping methods (ALC), where each layer has its own adaptively fine-tuned clipping constant, have been introduced and studied, but so far without rigorous DP proofs. In this paper, we propose {\em a new ALC and provide rigorous DP proofs for both BC and ALC}. Experiments show that our modified DPSGD with BC and ALC for CIFAR-10 with resnet-18 converges while DPSGD with IC and ALC does not. Comment: 20 pages, 18 figures
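
    To make the layerwise idea concrete, here is a minimal sketch of clipping with per-layer constants adapted from observed gradient norms; the moving-average adaptation rule is an illustrative assumption, not the specific ALC proposed in the paper.

        import numpy as np

        def layerwise_clip(layer_grads, C):
            """Clip the batch-averaged gradient of each layer to its own C_l."""
            out = []
            for g, c in zip(layer_grads, C):
                n = np.linalg.norm(g)
                out.append(g * min(1.0, c / n) if n > 0 else g)
            return out

        def adapt_constants(C, layer_grads, beta=0.9):
            """Hypothetical rule: move each C_l toward the observed layer norm."""
            return [beta * c + (1.0 - beta) * np.linalg.norm(g)
                    for c, g in zip(C, layer_grads)]

        rng = np.random.default_rng(0)
        layer_grads = [rng.normal(size=s) for s in (10, 20, 5)]  # one entry per layer
        C = [1.0, 1.0, 1.0]
        clipped = layerwise_clip(layer_grads, C)
        C = adapt_constants(C, layer_grads)
        print([round(float(np.linalg.norm(g)), 3) for g in clipped], C)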

    Hogwild! over Distributed Local Data Sets with Linearly Increasing Mini-Batch Sizes

    Hogwild! implements asynchronous Stochastic Gradient Descent (SGD) where multiple threads in parallel access a common repository containing training data, perform SGD iterations, and update shared state that represents a jointly learned (global) model. We consider big data analysis where training data is distributed among local data sets in a heterogeneous way, and we wish to move SGD computations to local compute nodes where the local data resides. The results of these local SGD computations are aggregated by a central "aggregator" which mimics Hogwild!. We show how local compute nodes can start by choosing small mini-batch sizes which increase to larger ones in order to reduce communication cost (round interaction with the aggregator). We improve on the state-of-the-art literature and show $O(\sqrt{K})$ communication rounds for heterogeneous data for strongly convex problems, where $K$ is the total number of gradient computations across all local compute nodes. For our scheme, we prove a \textit{tight} and novel non-trivial convergence analysis for strongly convex problems for {\em heterogeneous} data which does not use the bounded gradient assumption as seen in many existing publications. The tightness is a consequence of our proofs for lower and upper bounds of the convergence rate, which show a constant factor difference. We show experimental results for plain convex and non-convex problems for biased (i.e., heterogeneous) and unbiased local data sets. Comment: arXiv admin note: substantial text overlap with arXiv:2007.09208 AISTATS 202
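
    The communication saving comes entirely from the batch-size schedule. A minimal sketch, assuming a linear schedule s_i = s0 * (i + 1) (illustrative, not the paper's exact scheme):

        # Mini-batch sizes per communication round until K gradients are spent.
        def round_sizes(K, s0=4):
            sizes, i, used = [], 0, 0
            while used < K:
                s = s0 * (i + 1)               # linearly increasing mini-batch size
                sizes.append(min(s, K - used))
                used += sizes[-1]
                i += 1
            return sizes

        sizes = round_sizes(K=10_000)
        print(len(sizes), sizes[:5])
        # Constant batches of size s0 would need K/s0 = 2500 rounds; the linear
        # schedule needs about sqrt(2K/s0), roughly 71 here, matching the
        # O(sqrt(K)) communication rounds claimed above.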

    Theory of biopolymer stretching at high forces

    We provide a unified theory for the high-force elasticity of biopolymers solely in terms of the persistence length, $\xi_p$, and the monomer spacing, $a$. When the force $f > f_h \sim k_B T \xi_p / a^2$ the biopolymers behave as Freely Jointed Chains (FJCs), while in the range $f_l \sim k_B T / \xi_p < f < f_h$ the Worm-like Chain (WLC) is a better model. We show that $\xi_p$ can be estimated from the force-extension curve (FEC) at the extension $x \approx 1/2$ (normalized by the contour length of the biopolymer). After validating the theory using simulations, we provide a quantitative analysis of the FECs for a diverse set of biopolymers (dsDNA, ssRNA, ssDNA, polysaccharides, and the unstructured PEVK domain of titin) for $x \ge 1/2$. The success of a specific polymer model (FJC or WLC) in describing the FEC of a given biopolymer is naturally explained by the theory. Only by probing the response of biopolymers over a wide range of forces can the $f$-dependent elasticity be fully described. Comment: 20 pages, 4 figures
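
    For reference, the two models named above have standard closed forms: the FJC extension follows the Langevin function, and the WLC force is commonly approximated by the Marko-Siggia interpolation. A minimal sketch using the textbook formulas, with the Kuhn length taken as 2*xi_p (an assumption; this is not the paper's fitting code):

        import numpy as np

        def fjc_extension(f, xi_p, kT=1.0):
            """FJC: x = coth(u) - 1/u with u = f b / kT and Kuhn length b = 2 xi_p."""
            u = f * (2.0 * xi_p) / kT
            return 1.0 / np.tanh(u) - 1.0 / u

        def wlc_force(x, xi_p, kT=1.0):
            """Marko-Siggia WLC: f = (kT/xi_p) * (x + 1/(4(1-x)^2) - 1/4)."""
            return (kT / xi_p) * (x + 0.25 / (1.0 - x)**2 - 0.25)

        xi_p = 0.5
        print(fjc_extension(10.0, xi_p))   # high-force (FJC) regime, f > f_h
        print(wlc_force(0.5, xi_p))        # at x = 1/2, f ~ kT/xi_p, so reading the
                                           # FEC there gives an estimate of xi_p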

    Roots of the derivative of the Riemann zeta function and of characteristic polynomials

    We investigate the horizontal distribution of zeros of the derivative of the Riemann zeta function and compare this to the radial distribution of zeros of the derivative of the characteristic polynomial of a random unitary matrix. Both cases show a surprising bimodal distribution which has yet to be explained. We show by example that the bimodality is a general phenomenon. For the unitary matrix case we prove a conjecture of Mezzadri concerning the leading order behavior, and we show that the same follows from the random matrix conjectures for the zeros of the zeta function. Comment: 24 pages, 6 figures
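
    The random matrix side of this comparison is easy to explore numerically. A minimal sketch that samples Haar unitaries via the standard QR recipe, forms the characteristic polynomial, and histograms the radii of the zeros of its derivative; the matrix size and sample count are arbitrary choices:

        import numpy as np

        def haar_unitary(n, rng):
            z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2.0)
            q, r = np.linalg.qr(z)
            d = np.diag(r)
            return q * (d / np.abs(d))      # fix column phases for Haar measure

        rng = np.random.default_rng(0)
        radii = []
        for _ in range(200):
            U = haar_unitary(20, rng)
            p = np.poly(np.linalg.eigvals(U))          # characteristic polynomial
            radii.extend(np.abs(np.roots(np.polyder(p))))
        print(np.histogram(radii, bins=10, range=(0.8, 1.0))[0])  # radial counts near |z| = 1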

    Generalizing DP-SGD with shuffling and batch clipping

    Classical differentially private DP-SGD implements individual clipping with random subsampling, which forces a mini-batch SGD approach. We provide a general differentially private algorithmic framework that goes beyond DP-SGD and allows any first-order optimizer (e.g., classical SGD and momentum-based SGD approaches) in combination with batch clipping, which clips an aggregate of computed gradients rather than summing clipped gradients (as is done in individual clipping). The framework also admits sampling techniques beyond random subsampling, such as shuffling. Our DP analysis follows the $f$-DP approach and introduces a new proof technique based on a slightly stronger adversarial model which allows us to derive simple closed-form expressions and to also analyse group privacy. In particular, for $E$ epochs of work and groups of size $g$, we show a $\sqrt{gE}$ DP dependency for batch clipping with shuffling.
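
    As a back-of-the-envelope reading of the $\sqrt{gE}$ dependency: if one epoch for a single individual costs mu in f-DP terms, the stated result is that E epochs and group size g together cost roughly sqrt(g*E) * mu. Treating mu as a Gaussian-DP parameter in this arithmetic is an illustrative assumption:

        import math

        def group_epoch_cost(mu_single, g, E):
            """Scale a per-epoch, per-individual cost by the stated sqrt(g E) factor."""
            return math.sqrt(g * E) * mu_single

        for g, E in [(1, 1), (1, 16), (4, 16)]:
            print(g, E, round(group_epoch_cost(0.5, g, E), 2))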