
    Nonconventional Large Deviations Theorems

    We obtain large deviations theorems for nonconventional sums when the underlying process is a Markov process satisfying the Doeblin condition, or a dynamical system such as a subshift of finite type or a hyperbolic or expanding transformation.
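
    For context, "nonconventional sums" in this literature are typically of the following form; the specific choice $q_j(n) = jn$ below is the standard example and is stated here as an assumption, not taken from the abstract:

\[
S_N \;=\; \sum_{n=1}^{N} F\bigl(X_{q_1(n)}, X_{q_2(n)}, \dots, X_{q_\ell(n)}\bigr), \qquad \text{e.g. } q_j(n) = jn,
\]

    so each summand couples the process at several widely separated times. A large deviations theorem then controls the exponential decay rate of $P\{|S_N/N - \bar{F}| > \delta\}$, where $\bar{F}$ is the limiting average of $S_N/N$.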

    Resonances and Twist in Volume-Preserving Mappings

    The phase space of an integrable, volume-preserving map with one action and $d$ angles is foliated by a one-parameter family of $d$-dimensional invariant tori. Perturbations of such a system may lead to chaotic dynamics and transport. We show that near a rank-one, resonant torus these mappings can be reduced to volume-preserving "standard maps." These have twist only when the image of the frequency map crosses the resonance curve transversely. We show that these maps can be approximated, using averaging theory, by the usual area-preserving twist or nontwist standard maps. The twist condition appropriate for the volume-preserving setting is shown to be distinct from the nondegeneracy condition used in (volume-preserving) KAM theory. Comment: Many typos fixed and notation simplified. New $n^{\mathrm{th}}$-order averaging theorem and volume-preserving variant. Numerical comparison with averaging added.
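
    As a point of reference, the area-preserving (Chirikov) standard map mentioned above can be iterated in a few lines; the sketch below is illustrative, and the parameter $K$ and the initial conditions are arbitrary choices, not values from the paper:

```python
import numpy as np

def standard_map(theta, p, K):
    """One step of the area-preserving Chirikov standard map.

    Kick:  p' = p + K*sin(theta)
    Twist: theta' = theta + p'   (the twist is theta's dependence on p')
    Both coordinates are taken mod 2*pi.
    """
    p_new = (p + K * np.sin(theta)) % (2 * np.pi)
    theta_new = (theta + p_new) % (2 * np.pi)
    return theta_new, p_new

# Trace one orbit; K controls the strength of the resonant kick.
theta, p = 0.5, 0.2
for _ in range(1000):
    theta, p = standard_map(theta, p, K=0.8)
```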

    Differentially Private Distributed Optimization

    In the distributed optimization and iterative consensus literature, a standard problem is for $N$ agents to minimize a function $f$ over a subset of Euclidean space, where the cost function is expressed as a sum $\sum f_i$. In this paper, we study the private distributed optimization (PDOP) problem with the additional requirement that the cost functions of the individual agents remain differentially private. The adversary attempts to infer information about the private cost functions from the messages that the agents exchange. Achieving differential privacy requires that any change of an individual's cost function results only in insubstantial changes in the statistics of the messages. We propose a class of iterative algorithms for solving PDOP that achieve differential privacy and convergence to the optimal value. Our analysis reveals the dependence of the achieved accuracy and privacy levels on the parameters of the algorithm. We observe that to achieve $\epsilon$-differential privacy, the accuracy of the algorithm is of order $O(1/\epsilon^2)$.
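
    The flavor of such algorithms can be conveyed by a consensus-plus-gradient step in which each agent perturbs its outgoing message with Laplace noise. This is a simplified sketch, not the authors' exact update; the weight matrix, step size, and noise scale below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def private_consensus_step(x, W, grads, step, noise_scale):
    """One round of noisy consensus followed by a local gradient step.

    x           : (N, d) array, current iterate of each of the N agents
    W           : (N, N) doubly stochastic weight matrix of the network
    grads       : (N, d) array, local (sub)gradients of each agent's f_i
    noise_scale : Laplace scale; larger values give stronger privacy
                  (smaller epsilon) at the cost of accuracy
    """
    # Agents broadcast noisy iterates so that neighbors cannot reliably
    # reconstruct the private cost functions f_i from the messages.
    messages = x + rng.laplace(scale=noise_scale, size=x.shape)
    # Average the received messages, then descend along the local gradient.
    return W @ messages - step * grads
```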

    Private Multiplicative Weights Beyond Linear Queries

    A wide variety of fundamental data analyses in machine learning, such as linear and logistic regression, require minimizing a convex function defined by the data. Since the data may contain sensitive information about individuals, and these analyses can leak that sensitive information, it is important to be able to solve convex minimization in a privacy-preserving way. A series of recent results show how to accurately solve a single convex minimization problem in a differentially private manner. However, the same data is often analyzed repeatedly, and little is known about solving multiple convex minimization problems with differential privacy. For simpler data analyses, such as linear queries, there are remarkable differentially private algorithms such as the private multiplicative weights mechanism (Hardt and Rothblum, FOCS 2010) that accurately answer exponentially many distinct queries. In this work, we extend these results to the case of convex minimization and show how to give accurate and differentially private solutions to *exponentially many* convex minimization problems on a sensitive dataset.
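
    A bare-bones version of the multiplicative weights update behind such mechanisms, for the linear-query case, is sketched below. It is a simplification of the Hardt-Rothblum mechanism for illustration only; the real algorithm's noise calibration, thresholding, and privacy accounting are omitted, and all parameter values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def private_mw(hist, queries, eta=0.1, noise=0.05, threshold=0.1):
    """Simplified private multiplicative weights over a finite data universe.

    hist    : normalized histogram of the sensitive dataset
    queries : linear queries, each a vector in [0, 1]^len(hist)
    A public synthetic distribution x answers most queries for free;
    only queries on which x is badly wrong spend privacy budget.
    """
    x = np.full_like(hist, 1.0 / hist.size)  # uniform initial guess
    answers = []
    for q in queries:
        noisy = q @ hist + rng.laplace(scale=noise)  # noisy true answer
        if abs(noisy - q @ x) > threshold:
            # Multiplicative update pulls x toward agreement with the data.
            x *= np.exp(eta * q * np.sign(noisy - q @ x))
            x /= x.sum()
            answers.append(noisy)
        else:
            answers.append(q @ x)  # answered from the public state alone
    return answers
```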

    Infinitely Many Stochastically Stable Attractors

    Let $f$ be a diffeomorphism of a compact, finite-dimensional, boundaryless manifold $M$ exhibiting infinitely many coexisting attractors. Assume that each attractor supports a stochastically stable probability measure and that the union of the basins of attraction of the attractors covers Lebesgue almost all points of $M$. We prove that the time averages of almost all orbits under random perturbations are given by a finite number of probability measures. Moreover, these probability measures are close to the probability measures supported by the attractors when the perturbations are close to the original map $f$. Comment: 14 pages, 2 figures.

    Fast-slow partially hyperbolic systems versus Freidlin-Wentzell random systems

    We consider a simple class of fast-slow partially hyperbolic dynamical systems and show that the (properly rescaled) behaviour of the slow variable is very close to a Freidlin-Wentzell-type random system for times that are rather long, but much shorter than the metastability scale. Also, we show the possibility of a "sink" with all Lyapunov exponents positive, a phenomenon that turns out to be related to the lack of absolute continuity of the central foliation. Comment: To appear in Journal of Statistical Physics.
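
    A toy system of this fast-slow kind couples a slow variable to a chaotic fast one. The sketch below is illustrative only: the logistic fast map and the coupling term are stand-ins chosen for simplicity, not the class studied in the paper:

```python
import numpy as np

def fast_slow_orbit(z0, x0, eps, n_steps):
    """Slow variable z weakly driven by a fast chaotic variable x.

    The slow variable moves O(eps) per step, so over ~1/eps steps its
    rescaled path looks like an averaged drift plus small random-like
    fluctuations, as in a Freidlin-Wentzell system.
    """
    z, x = z0, x0
    zs = np.empty(n_steps)
    for n in range(n_steps):
        z += eps * np.cos(2.0 * np.pi * x + z)  # illustrative coupling
        x = 3.9 * x * (1.0 - x)                 # fast chaotic (logistic) step
        zs[n] = z
    return zs

# One slow path over a time horizon of order 1/eps.
path = fast_slow_orbit(z0=0.0, x0=0.123, eps=1e-3, n_steps=5000)
```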

    Stochastic stability at the boundary of expanding maps

    We consider endomorphisms of a compact manifold which are expanding except at a finite number of points and prove the existence and uniqueness of a physical measure and its stochastic stability. We also characterize the zero-noise limit measures for a model of the intermittent map and obtain stochastic stability for some values of the parameter. The physical measures are obtained as zero-noise limits, which are shown to satisfy Pesin's Entropy Formula.
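
    One standard model of an intermittent map is the Manneville-Pomeau family, which is expanding everywhere except at a neutral fixed point at the origin; whether this is the exact model used in the paper is an assumption here. A minimal sketch of its randomly perturbed orbits:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_intermittent_orbit(x0, alpha, noise, n_steps):
    """Manneville-Pomeau-type map with small additive noise.

    T(x) = x * (1 + x**alpha) mod 1 is expanding except at the neutral
    fixed point x = 0, where orbits linger (intermittency). Sending
    `noise` to 0 probes the zero-noise limit measures.
    """
    x = x0
    orbit = np.empty(n_steps)
    for n in range(n_steps):
        x = (x * (1.0 + x**alpha) + rng.normal(scale=noise)) % 1.0
        orbit[n] = x
    return orbit
```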

    Detecting Change in Data Streams


    Using a physics-informed neural network and fault zone acoustic monitoring to predict lab earthquakes

    Predicting failure in solids has broad applications, including earthquake prediction, which remains an unattainable goal. However, recent machine learning work shows that laboratory earthquakes can be predicted using micro-failure events and the temporal evolution of fault zone elastic properties. Remarkably, these results come from purely data-driven models trained with large datasets. Such data are equivalent to centuries of fault motion, rendering application to tectonic faulting unclear. In addition, the underlying physics of such predictions is poorly understood. Here, we address scalability using a novel Physics-Informed Neural Network (PINN). Our model encodes fault physics in the deep learning loss function using time-lapse ultrasonic data. PINN models outperform data-driven models and significantly improve transfer learning for small training datasets and conditions outside those used in training. Our work suggests that PINNs offer a promising path for machine learning-based failure prediction and, ultimately, for improving our understanding of earthquake physics and prediction.
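
    The core device, encoding the physics in the loss, can be sketched as a composite objective. This is schematic, not the authors' architecture; the residual term is a placeholder for whatever governing equation (e.g. a wave-propagation or friction-law constraint on the ultrasonic data) is enforced:

```python
import numpy as np

def pinn_loss(pred, data, physics_residual, weight=1.0):
    """Composite PINN objective: data misfit plus a physics penalty.

    pred             : model outputs at the training points
    data             : observed labels (e.g. time-to-failure)
    physics_residual : violation of the governing equation by the
                       model's outputs, evaluated at collocation points
    The physics term regularizes the network toward physically
    consistent solutions, which is what helps transfer learning to
    conditions outside the training distribution.
    """
    data_term = np.mean((pred - data) ** 2)
    physics_term = np.mean(physics_residual ** 2)
    return data_term + weight * physics_term
```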