    On the Convergence of Alternating Direction Lagrangian Methods for Nonconvex Structured Optimization Problems

    Nonconvex and structured optimization problems arise in many engineering applications that demand scalable and distributed solution methods. The study of the convergence properties of these methods is in general difficult due to the nonconvexity of the problem. In this paper, two distributed solution methods that combine the fast convergence properties of augmented Lagrangian-based methods with the separability properties of alternating optimization are investigated. The first method is adapted from the classic quadratic penalty function method and is called the Alternating Direction Penalty Method (ADPM). Unlike the original quadratic penalty function method, in which single-step optimizations are adopted, ADPM uses alternating optimization, which in turn makes it scalable. The second method is the well-known Alternating Direction Method of Multipliers (ADMM). It is shown that ADPM for nonconvex problems asymptotically converges to a primal feasible point under mild conditions, and an additional condition ensuring that it asymptotically reaches the standard first-order necessary conditions for local optimality is introduced. In the case of ADMM, novel sufficient conditions under which the algorithm asymptotically reaches the standard first-order necessary conditions are established. Based on this, complete convergence of ADMM for a class of low-dimensional problems is characterized. Finally, the results are illustrated by applying ADPM and ADMM to a nonconvex localization problem in wireless sensor networks. Comment: 13 pages, 6 figures
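    Since the abstract centers on the alternating-direction update pattern, a minimal sketch of scaled-form ADMM for a generic consensus problem min f(x) + g(z) s.t. x = z may help fix ideas. The subproblem solvers argmin_f and argmin_g are placeholders for problem-specific steps, not code from the paper, and for nonconvex f each x-update is only a local minimization.

        # Minimal scaled-form ADMM sketch for min f(x) + g(z) s.t. x = z.
        # argmin_f(v, rho) is assumed to return argmin_x f(x) + (rho/2)||x - v||^2,
        # and argmin_g(v, rho) the analogous z-step; both are illustrative assumptions.
        import numpy as np

        def admm(argmin_f, argmin_g, n, rho=1.0, iters=200):
            x, z, y = np.zeros(n), np.zeros(n), np.zeros(n)  # y is the scaled dual variable
            for _ in range(iters):
                x = argmin_f(z - y, rho)   # x-update against current z and multiplier
                z = argmin_g(x + y, rho)   # z-update against the fresh x
                y = y + x - z              # dual ascent on the consensus constraint x = z
            return x, z

    Roughly speaking, ADPM as described in the abstract would keep the same alternating x- and z-steps but drop the multiplier update in favor of a gradually increasing penalty parameter rho, in the spirit of the quadratic penalty method.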

    Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning

    In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) the privacy budget consumption is totally independent of the number of training steps; (2) it has the ability to adaptively inject noise into features based on the contribution of each feature to the output; and (3) it can be applied to a variety of different deep neural networks. To achieve this, we figure out a way to perturb the affine transformations of neurons and the loss functions used in deep neural networks. In addition, our mechanism intentionally adds "more noise" into features which are "less relevant" to the model output, and vice versa. Our theoretical analysis further derives the sensitivities and error bounds of our mechanism. Rigorous experiments conducted on the MNIST and CIFAR-10 datasets show that our mechanism is highly effective and outperforms existing solutions. Comment: IEEE ICDM 2017 - regular paper
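    To illustrate the "more noise into less relevant features" idea, here is a hedged sketch of relevance-weighted Laplace perturbation. The relevance scores, the uniform sensitivity, and the proportional per-feature budget split are illustrative assumptions; the paper derives the sensitivities and the adaptive allocation analytically.

        # Sketch: split the privacy budget epsilon across features in proportion
        # to a relevance score, so more relevant features get a larger budget
        # and hence a smaller Laplace scale (less noise). Illustrative only.
        import numpy as np

        def adaptive_laplace(features, relevance, epsilon, sensitivity=1.0):
            relevance = np.asarray(relevance, dtype=float)
            eps_per_feature = epsilon * relevance / relevance.sum()
            scale = sensitivity / eps_per_feature   # Laplace scale b_j = S / eps_j
            return features + np.random.laplace(0.0, scale, size=features.shape)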

    Note on a diffraction-amplification problem

    We investigate the solution of the equation $\partial_t E(x,t) - iD\,\partial_x^2 E(x,t) = \lambda |S(x,t)|^2 E(x,t)$, for $x$ on a circle and $S(x,t)$ a Gaussian stochastic field with a covariance of a particular form. It is shown that the coupling $\lambda_c$ at which the solution diverges for $t \ge 1$ (in suitable units) is, for $D > 0$, always less than or equal to its value at $D = 0$. Comment: REVTeX file, 8 pages, submitted to Journal of Physics
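    As a rough illustration of the equation's structure, a split-step Fourier integrator alternates the exact diffraction step in Fourier space with the pointwise amplification step in real space. The Gaussian field S below is a fresh white-noise placeholder at each step, not the particular covariance the note assumes.

        # Split-step sketch for dE/dt = i*D*d^2E/dx^2 + lam*|S|^2*E on a circle.
        import numpy as np

        def split_step(E0, D, lam, L=2*np.pi, dt=1e-3, steps=1000, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            n = E0.size
            k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers on the circle
            diffract = np.exp(-1j * D * k**2 * dt)       # exact linear (diffraction) propagator
            E = E0.astype(complex)
            for _ in range(steps):
                S = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # placeholder field
                E = E * np.exp(lam * np.abs(S)**2 * dt)      # amplification step in real space
                E = np.fft.ifft(diffract * np.fft.fft(E))    # diffraction step in Fourier space
            return E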

    Recent developments in econometric modeling and forecasting

    Specialized motor-driven dusp1 expression in the song systems of multiple lineages of vocal learning birds.

    Mechanisms for the evolution of convergent behavioral traits are largely unknown. Vocal learning is one such trait that evolved multiple times and is necessary in humans for the acquisition of spoken language. Among birds, vocal learning has evolved in songbirds, parrots, and hummingbirds. Each time, similar forebrain song nuclei specialized for vocal learning and production have evolved. This finding led to the hypothesis that the behavioral and neuroanatomical convergences for vocal learning could be associated with molecular convergence. We previously found that the neural activity-induced gene dual specificity phosphatase 1 (dusp1) was up-regulated in non-vocal circuits, specifically in sensory-input neurons of the thalamus and telencephalon; however, dusp1 was not up-regulated in higher-order sensory neurons or motor circuits. Here we show that song motor nuclei are an exception to this pattern. The song nuclei of species from all known vocal learning avian lineages showed motor-driven up-regulation of dusp1 expression induced by singing. There was no detectable motor-driven dusp1 expression throughout the rest of the forebrain after non-vocal motor performance. This pattern contrasts with expression of the commonly studied activity-induced gene egr1, which shows motor-driven expression in song nuclei induced by singing, but also motor-driven expression in adjacent brain regions after non-vocal motor behaviors. In the vocal non-learning avian species, we found no detectable vocalizing-driven dusp1 expression in the forebrain. These findings suggest that independent evolutions of neural systems for vocal learning were accompanied by selection for specialized motor-driven expression of the dusp1 gene in those circuits. This specialized expression of dusp1 could potentially lead to differential regulation of dusp1-modulated molecular cascades in vocal learning circuits.

    Tourism demand forecasting: a time-varying parameter error correction model

    Tourism demand modelling and forecasting: a review of recent research
