
    Smolyak's algorithm: A powerful black box for the acceleration of scientific computations

    We provide a general discussion of Smolyak's algorithm for the acceleration of scientific computations. The algorithm first appeared in Smolyak's work on multidimensional integration and interpolation. Since then, it has been generalized in multiple directions and has become associated with the keywords sparse grids, hyperbolic cross approximation, combination technique, and multilevel methods. Variants of Smolyak's algorithm have been employed in the computation of high-dimensional integrals in finance, chemistry, and physics, in the numerical solution of partial and stochastic differential equations, and in uncertainty quantification. Motivated by this broad and ever-increasing range of applications, we describe a general framework that summarizes fundamental results and assumptions in a concise, application-independent manner.
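
    As a rough illustration of the combination technique mentioned above, here is a minimal Python sketch of Smolyak quadrature on the unit square. The node counts (2**l + 1 trapezoidal points per level), the names smolyak_quad and tensor_quad, and the test integrand are illustrative choices, not taken from the paper.

        import numpy as np
        from math import comb
        from itertools import product

        def tensor_quad(f, levels):
            # full tensor-product trapezoidal rule with 2**l + 1 nodes per dimension
            grids = [np.linspace(0.0, 1.0, 2**l + 1) for l in levels]
            total = 0.0
            for point in product(*grids):
                w = 1.0
                for x, g in zip(point, grids):
                    h = g[1] - g[0]
                    w *= h * (0.5 if x in (g[0], g[-1]) else 1.0)
                total += w * f(np.array(point))
            return total

        def smolyak_quad(f, d, q):
            # combination technique: signed sum of anisotropic coarse tensor grids
            # with level sums in [q - d + 1, q], instead of one full fine grid
            total = 0.0
            for levels in product(range(1, q + 1), repeat=d):
                k = sum(levels)
                if q - d + 1 <= k <= q:
                    total += (-1)**(q - k) * comb(d - 1, q - k) * tensor_quad(f, levels)
            return total

        # integrate exp(x + y) over [0, 1]^2; exact value is (e - 1)**2
        f = lambda p: np.exp(p.sum())
        print(smolyak_quad(f, d=2, q=7), (np.e - 1)**2)

    The signed sum touches only small anisotropic grids, which is how the algorithm avoids the cost of the full tensor grid at the finest level.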

    A feasible and objective concept of optimality: the quadratic loss function and U.S. monetary policy in the 1960s

    The introduction of linear-quadratic methods in monetary economics in the 1960s shaped the intense debate about the optimal monetary policy instrument. These methods were widely used outside monetary economics because they delivered easy solutions to complex stochastic models. According to the conventional wisdom among monetary economists, this same reason explains the success of quadratic loss functions. In this traditional narrative, Henri Theil and Herbert Simon are often cited for their proofs that models with quadratic objective functions have the certainty equivalence property. This attribute made the solution of these models feasible for the computers available at that time. This paper shows how the use of a quadratic loss function to characterize the behavior of central banks inaugurated an objective or uniform way of talking about optimality. In this respect, the discourse on optimal monetary policy stabilized. Moreover, a richer account of the quadratic approach to the monetary policy debate emerges by analyzing how quadratic loss functions were used in operations research and management problems by groups of scientists that included economists like Modigliani and Simon. I argue that feasibility is only one important factor that explains the wide popularity of quadratic functions in monetary economics.
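
    To fix ideas, the loss function at the center of this story has a stylized form that can be written down directly. A minimal LaTeX sketch, with illustrative notation (\pi_t for inflation, y_t for output, \lambda for the relative weight; the paper itself is historical and commits to no single specification):

        L \;=\; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^{t}
        \left[ (\pi_t - \pi^{*})^{2} + \lambda\,(y_t - y^{*})^{2} \right]

    Certainty equivalence, the property Theil and Simon proved, says that when such a quadratic loss is minimized subject to linear dynamics, the optimal decision rule coincides with the one obtained after replacing all random variables by their expected values, which is why the stochastic problem was no harder for the computers of the time than its deterministic counterpart.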

    Transient LTRE analysis reveals the demographic and trait-mediated processes that buffer population growth

    Temporal variation in environmental conditions affects population growth directly via its impact on vital rates, and indirectly through induced variation in demographic structure and phenotypic trait distributions. We currently know very little about how these processes jointly mediate population responses to their environment. To address this gap, we develop a general transient life table response experiment (LTRE) which partitions the contributions to population growth arising from variation in (1) survival and reproduction, (2) demographic structure, (3) trait values and (4) climatic drivers. We apply the LTRE to a population of yellow-bellied marmots (Marmota flaviventer) to demonstrate the impact of demographic and trait-mediated processes. Our analysis provides a new perspective on demographic buffering, which may be a more subtle phenomenon than is currently assumed. The new LTRE framework presents opportunities to improve our understanding of how trait variation influences population dynamics and adaptation in stochastic environments.
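
    For readers unfamiliar with LTRE decompositions, the numpy sketch below shows the classic fixed-design version that the transient LTRE generalizes: the difference in growth rate between two projection matrices is partitioned into per-entry contributions using sensitivities evaluated at the mean matrix. The two-stage matrices and all names here are hypothetical, and the transient, trait-mediated machinery of the paper is not reproduced.

        import numpy as np

        def dominant_eig(A):
            # dominant eigenvalue and its eigenvector
            vals, vecs = np.linalg.eig(A)
            k = np.argmax(np.real(vals))
            return np.real(vals[k]), np.real(vecs[:, k])

        def ltre_contributions(A1, A2):
            # classic fixed-design LTRE: lambda(A2) - lambda(A1) is approximated
            # by summing (entry difference) x (sensitivity at the mean matrix)
            Am = 0.5 * (A1 + A2)
            _, w = dominant_eig(Am)       # stable stage structure
            _, v = dominant_eig(Am.T)     # reproductive values (left eigenvector)
            S = np.outer(v, w) / (v @ w)  # sensitivities d(lambda)/d(a_ij)
            return (A2 - A1) * S          # per-entry contributions

        A1 = np.array([[0.0, 1.2], [0.5, 0.4]])  # hypothetical 2-stage matrices
        A2 = np.array([[0.0, 1.5], [0.4, 0.5]])
        C = ltre_contributions(A1, A2)
        print(C.sum(), dominant_eig(A2)[0] - dominant_eig(A1)[0])

    The two printed numbers agree to first order, and the per-entry matrix C shows which vital rate drives the change in growth rate.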

    Efficient Learning of Sparse Conditional Random Fields for Supervised Sequence Labelling

    Conditional Random Fields (CRFs) constitute a popular and efficient approach to supervised sequence labelling. CRFs can cope with large description spaces and can integrate some form of structural dependency between labels. In this contribution, we address the issue of efficient feature selection for CRFs by imposing sparsity through an L1 penalty. We first show how sparsity of the parameter set can be exploited to significantly speed up training and labelling. We then introduce coordinate descent parameter update schemes for CRFs with L1 regularization. We finally provide empirical comparisons of the proposed approach with state-of-the-art CRF training strategies. In particular, we show that the proposed approach is able to exploit sparsity to speed up processing and hence potentially handle larger-dimensional models.
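
    The coordinate descent idea is easiest to see on a simpler L1-penalized objective. The sketch below applies the soft-thresholding coordinate update to a least-squares surrogate (the lasso); in the paper the same style of update targets the penalized CRF log-likelihood, whose gradients come from forward-backward recursions omitted here, so everything below is a stand-in rather than the authors' algorithm.

        import numpy as np

        def soft_threshold(z, t):
            # proximal operator of the L1 penalty
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def lasso_cd(X, y, lam, n_iter=100):
            # coordinate descent for 0.5*||y - X @ theta||^2 + lam*||theta||_1;
            # each coordinate update is an exact soft-thresholding step
            n, d = X.shape
            theta = np.zeros(d)
            col_sq = (X ** 2).sum(axis=0)
            r = y - X @ theta
            for _ in range(n_iter):
                for k in range(d):
                    r += X[:, k] * theta[k]    # partial residual without coordinate k
                    theta[k] = soft_threshold(X[:, k] @ r, lam) / col_sq[k]
                    r -= X[:, k] * theta[k]    # restore the full residual
            return theta

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))
        true = np.zeros(10)
        true[:3] = [2.0, -1.0, 0.5]
        y = X @ true + 0.1 * rng.normal(size=200)
        print(np.round(lasso_cd(X, y, lam=5.0), 2))  # most coordinates end up exactly 0

    Exact zeros in the solution are what enable the speed-ups described above: zero parameters can simply be skipped during training and labelling.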

    Multiclass latent locally linear support vector machines

    Kernelized Support Vector Machines (SVMs) have gained the status of off-the-shelf classifiers, able to deliver state-of-the-art performance on almost any problem. Still, their practical use is constrained by their computational and memory complexity, which grows super-linearly with the number of training samples. In order to retain the low training and testing complexity of linear classifiers and the flexibility of non-linear ones, a growing and promising alternative is represented by methods that learn non-linear classifiers through local combinations of linear ones. In this paper we propose a new multiclass local classifier, based on a latent SVM formulation. The proposed classifier makes use of a set of linear models that are linearly combined using sample- and class-specific weights. Thanks to the latent formulation, the combination coefficients are modeled as latent variables. We allow soft combinations and provide a closed-form solution for their estimation, resulting in an efficient prediction rule. This novel formulation allows learning the sample-specific weights and the linear classifiers in a principled way, within a single optimization problem, using a CCCP optimization procedure. Extensive experiments on ten standard UCI machine learning datasets, one large binary dataset, three character and digit recognition databases, and a visual place categorization dataset show the power of the proposed approach.
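
    A minimal numpy sketch of a prediction rule in this spirit, assuming softmax-style latent weights over the local models (the paper's actual closed-form estimator for the combination coefficients may differ; all shapes and names here are illustrative):

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max())
            return e / e.sum()

        def predict(x, W):
            # W has shape (n_classes, n_models, n_features): a bank of local
            # linear models per class. Each class score is a soft, sample- and
            # class-specific combination of its local linear scores.
            scores = W @ x                                  # (n_classes, n_models)
            class_scores = np.array([softmax(s) @ s for s in scores])
            return int(np.argmax(class_scores)), class_scores

        rng = np.random.default_rng(1)
        W = rng.normal(size=(3, 4, 5))   # 3 classes, 4 local models, 5 features
        x = rng.normal(size=5)
        print(predict(x, W))

    Because each class score is a weighted sum of dot products, prediction stays linear-time in the number of local models, which is the efficiency argument made above.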

    Comparison of Different Parallel Implementations of the 2+1-Dimensional KPZ Model and the 3-Dimensional KMC Model

    We show that efficient simulations of Kardar-Parisi-Zhang interface growth in 2+1 dimensions and of 3-dimensional kinetic Monte Carlo simulations of thermally activated diffusion can be realized both on GPUs and on modern CPUs. In this article we present results of different implementations on GPUs using CUDA and OpenCL, and on CPUs using OpenCL and MPI. We investigate the runtime and scaling behavior on different architectures to find optimal solutions for solving current simulation problems in the field of statistical physics and materials science.
    Comment: 14 pages, 8 figures, to be published in a forthcoming EPJST special issue on "Computer simulations on GPU"
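
    The key to running such lattice models efficiently in parallel is a domain decomposition in which sites that do not interact are updated simultaneously. The numpy sketch below shows a checkerboard (two-sublattice) sweep for a generic KPZ-class RSOS growth model; the decomposition idea matches what GPU implementations exploit, but the specific rule and parameters are illustrative, not the article's exact model.

        import numpy as np

        def rsos_sweep(h, rng):
            # one checkerboard sweep of an RSOS growth model: sites of equal
            # parity share no nearest neighbours, so each sublattice can be
            # updated all at once -- the decomposition used on GPUs
            ii, jj = np.indices(h.shape)
            for parity in (0, 1):
                attempt = ((ii + jj) % 2 == parity) & (rng.random(h.shape) < 0.5)
                ok = np.ones(h.shape, dtype=bool)
                for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nb = np.roll(h, shift, axis=(0, 1))
                    ok &= np.abs(h + 1 - nb) <= 1    # RSOS constraint |dh| <= 1
                h[attempt & ok] += 1

        rng = np.random.default_rng(0)
        h = np.zeros((64, 64), dtype=int)   # periodic boundaries via np.roll
        for _ in range(200):
            rsos_sweep(h, rng)
        print(h.std())   # interface width, growing with KPZ-class exponents

    On a GPU, each sublattice update maps naturally onto one kernel launch, since all sites of a given parity can be assigned to independent threads.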