
    Small-time asymptotics for fast mean-reverting stochastic volatility models

    In this paper, we study stochastic volatility models in regimes where the maturity is small but large compared to the mean-reversion time of the stochastic volatility factor. The problem falls in the class of averaging/homogenization problems for nonlinear HJB-type equations where the "fast variable" lives in a noncompact space. We develop a general argument based on viscosity solutions, which we apply to the two regimes studied in the paper. We derive a large deviation principle, and we deduce asymptotic prices for out-of-the-money call and put options, and their corresponding implied volatilities. The results of this paper generalize the ones obtained in Feng, Forde and Fouque [SIAM J. Financial Math. 1 (2010) 126-141] by a moment generating function computation in the particular case of the Heston model. Comment: Published at http://dx.doi.org/10.1214/11-AAP801 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org)
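    Schematically, a large deviation principle for the log-price $X_t = \log(S_t/S_0)$ takes the generic form below; the rate function $I$ and the exact scaling are model-dependent, so this is a textbook-style sketch rather than the paper's statement:

```latex
\mathbb{P}\left( X_t \ge x \right) \;=\; \exp\!\left( -\frac{I(x)}{t} + o(1/t) \right),
\qquad t \downarrow 0, \quad x > 0.
```

    When out-of-the-money option prices decay at such a rate, one typically obtains implied-volatility limits of the form $\sigma_{\mathrm{imp}}^2(x,t) \to x^2 / \bigl(2 I(x)\bigr)$, which is how price asymptotics translate into implied-volatility asymptotics.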

    Facile and time-resolved chemical growth of nanoporous CaxCoO2 thin films for flexible and thermoelectric applications

    CaxCoO2 thin films can be promising for widespread flexible thermoelectric applications over a wide temperature range, from room-temperature self-powered wearable applications (harvesting power from body heat) to energy harvesting from hot surfaces (e.g., hot pipes), provided a cost-effective and facile growth technique is developed. Here, we demonstrate a time-resolved, facile and ligand-free soft chemical method for the growth of nanoporous Ca0.35CoO2 thin films on sapphire and mica substrates from a water-based precursor ink composed of in-situ prepared Ca2+-DMF and Co2+-DMF complexes. Mica serves as a flexible substrate as well as a sacrificial layer for film transfer. The grown films are oriented and can sustain bending stress down to a bending radius of 15 mm. Despite the presence of nanopores, the power factor of the Ca0.35CoO2 film is found to be as high as 0.50 x 10^-4 W m^-1 K^-2 near room temperature. The present technique, being simple and fast, is potentially suitable for cost-effective industrial upscaling. Comment: 16 pages, 5 figures
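    For context, the power factor reported above is the standard thermoelectric quantity combining the Seebeck coefficient $S$ and the electrical conductivity $\sigma$:

```latex
\mathrm{PF} \;=\; S^{2}\sigma, \qquad [\mathrm{PF}] = \mathrm{W\,m^{-1}\,K^{-2}},
```

    which matches the units of the value quoted near room temperature.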

    Distributed Machine Learning via Sufficient Factor Broadcasting

    Matrix-parameterized models, including multiclass logistic regression and sparse coding, are used in machine learning (ML) applications ranging from computer vision to computational biology. When these models are applied to large-scale ML problems starting at millions of samples and tens of thousands of classes, their parameter matrix can grow at an unexpected rate, resulting in high parameter synchronization costs that greatly slow down distributed learning. To address this issue, we propose a Sufficient Factor Broadcasting (SFB) computation model for efficient distributed learning of a large family of matrix-parameterized models, which share the following property: the parameter update computed on each data sample is a rank-1 matrix, i.e., the outer product of two "sufficient factors" (SFs). By broadcasting the SFs among worker machines and reconstructing the update matrices locally at each worker, SFB improves communication efficiency (communication costs are linear in the parameter matrix's dimensions, rather than quadratic) without affecting computational correctness. We present a theoretical convergence analysis of SFB, and empirically corroborate its efficiency on four different matrix-parameterized ML models.
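    The rank-1 update idea can be sketched in a few lines of NumPy (a minimal illustration under our own names and a least-squares-style update, not the paper's implementation): each worker computes its sufficient factors (u, v) from one sample, broadcasts only those two vectors, and every worker reconstructs the full update u v^T locally.

```python
import numpy as np

def sfb_round(workers, lr=0.1):
    """One illustrative SFB communication round.

    `workers` is a list of (X, y, W) tuples: one data sample X, its
    target y, and that worker's local copy of the parameter matrix W.
    """
    # Step 1: each worker computes its sufficient factors (u, v).
    factors = []
    for X, y, W in workers:
        pred = W @ X          # current model output for this sample
        u = lr * (y - pred)   # error-side factor (length: #outputs)
        v = X                 # data-side factor (length: #features)
        factors.append((u, v))
    # Step 2: broadcast the (u, v) pairs; each worker rebuilds every
    # rank-1 update u v^T locally, so only O(rows + cols) numbers are
    # communicated per update instead of O(rows * cols).
    for _, _, W in workers:
        for u, v in factors:
            W += np.outer(u, v)
    return workers
```

    Because every worker applies the same broadcast factors, the local parameter copies stay identical after each round, which is the sense in which computational correctness is unaffected.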

    Controlled atom transfer radical polymerization of MMA onto the surface of high-density functionalized graphene oxide

    We report on the grafting of poly(methyl methacrylate) (PMMA) onto the surface of high-density functionalized graphene oxide (GO) through controlled radical polymerization (CRP). To increase the density of surface grafting, GO was first diazotized (DGO) and then esterified with 2-bromoisobutyryl bromide, yielding an atom transfer radical polymerization (ATRP) initiator-functionalized DGO-Br. The functionalized DGO-Br was characterized by X-ray photoelectron spectroscopy (XPS), Raman spectroscopy, and XRD patterns. PMMA chains were then grafted onto the DGO-Br surface through a 'grafting from' technique using ATRP. Gel permeation chromatography (GPC) results revealed that the polymerization of methyl methacrylate (MMA) follows CRP. Thermal studies show that the resulting graphene-PMMA nanocomposites have higher thermal stability and glass transition temperatures (Tg) than pristine PMMA.

    Spin-Wave and Electromagnon Dispersions in Multiferroic MnWO4 as Observed by Neutron Spectroscopy: Isotropic Heisenberg Exchange versus Anisotropic Dzyaloshinskii-Moriya Interaction

    High-resolution inelastic neutron scattering reveals that the elementary magnetic excitations in multiferroic MnWO4 consist of low-energy dispersive electromagnons in addition to the well-known spin-wave excitations. The latter are well modeled by a Heisenberg Hamiltonian with magnetic exchange coupling extending to the 12th-nearest neighbor, and they exhibit a spin-wave gap of 0.61(1) meV. Two electromagnon branches appear at lower energies of 0.07(1) meV and 0.45(1) meV at the zone center. They reflect the dynamic magnetoelectric coupling and persist in both the collinear magnetic, paraelectric AF1 phase and the spin-spiral ferroelectric AF2 phase. These excitations are associated with the Dzyaloshinskii-Moriya exchange interaction, which is significant due to the rather large spin-orbit coupling. Comment: 8 pages, 6 figures, accepted for publication in Physical Review
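    The two model ingredients named above can be summarized by a generic spin Hamiltonian of the form (a schematic sketch; the couplings $J_{ij}$ extending to the 12th-nearest neighbor and the vectors $\mathbf{D}_{ij}$ are fitted parameters not reproduced here):

```latex
\mathcal{H} \;=\; \sum_{i>j} J_{ij}\, \mathbf{S}_i \cdot \mathbf{S}_j
\;+\; \sum_{i>j} \mathbf{D}_{ij} \cdot \left( \mathbf{S}_i \times \mathbf{S}_j \right),
```

    where the first, isotropic Heisenberg term governs the conventional spin waves and the antisymmetric Dzyaloshinskii-Moriya term supplies the magnetoelectric coupling behind the electromagnon branches.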

    Bagging in overparameterized learning: Risk characterization and risk monotonization

    Bagging is a commonly used ensemble technique in statistics and machine learning to improve the performance of prediction procedures. In this paper, we study the prediction risk of variants of bagged predictors under the proportional asymptotics regime, in which the ratio of the number of features to the number of observations converges to a constant. Specifically, we propose a general strategy to analyze the prediction risk under squared error loss of bagged predictors using classical results on simple random sampling. Specializing the strategy, we derive the exact asymptotic risk of the bagged ridge and ridgeless predictors with an arbitrary number of bags under a well-specified linear model with arbitrary feature covariance matrices and signal vectors. Furthermore, we prescribe a generic cross-validation procedure to select the optimal subsample size for bagging and discuss its utility to eliminate the non-monotonic behavior of the limiting risk in the sample size (i.e., double or multiple descents). In demonstrating the proposed procedure for bagged ridge and ridgeless predictors, we thoroughly investigate the oracle properties of the optimal subsample size and provide an in-depth comparison between different bagging variants. Comment: 100 pages, 34 figures; this version does slight reorganization and fixes minor typos
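    The subsample-bagging construction studied here can be sketched in a few lines (a minimal NumPy illustration with our own names and defaults, not the paper's code): fit ridge on `n_bags` simple random subsamples of size k and average the predictions.

```python
import numpy as np

def bagged_ridge_predict(X, y, X_test, k, n_bags, lam=1.0, seed=0):
    """Average ridge predictions over random subsamples of size k."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    preds = np.zeros(len(X_test))
    for _ in range(n_bags):
        # Simple random sampling without replacement, as in the
        # classical results the analysis builds on.
        idx = rng.choice(n, size=k, replace=False)
        Xs, ys = X[idx], y[idx]
        # Ridge estimator on the subsample: (Xs'Xs + lam I)^{-1} Xs'y.
        beta = np.linalg.solve(Xs.T @ Xs + lam * np.eye(p), Xs.T @ ys)
        preds += X_test @ beta
    return preds / n_bags   # bagged predictor: average over the bags
```

    The subsample size k is the tuning knob whose cross-validated choice the paper uses to remove double- or multiple-descent behavior of the risk.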

    Petuum: A New Platform for Distributed Machine Learning on Big Data

    What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial-scale problems, using Big Models (up to 100s of billions of parameters) on Big Data (up to terabytes or petabytes)? Modern parallelization strategies employ fine-grained operations and scheduling beyond the classic bulk-synchronous processing paradigm popularized by MapReduce, or even specialized graph-based execution that relies on graph representations of ML programs. The variety of approaches tends to pull systems and algorithms design in different directions, and it remains difficult to find a universal platform applicable to a wide range of ML programs at scale. We propose a general-purpose framework that systematically addresses data- and model-parallel challenges in large-scale ML, by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions. This presents unique opportunities for an integrative system design, such as bounded-error network synchronization and dynamic scheduling based on ML program structure. We demonstrate the efficacy of these system designs versus well-known implementations of modern ML algorithms, allowing ML programs to run in much less time and at considerably larger model sizes, even on modestly sized compute clusters. Comment: 15 pages, 10 figures, final version in KDD 2015 under the same title
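    The bounded-error synchronization idea can be illustrated with a stale-synchronous-style admission rule (a simulation sketch under our own names, not Petuum's actual API): a worker may advance only while it stays within a fixed staleness bound of the slowest worker, which bounds how stale the parameters it reads can be.

```python
import random

def ssp_can_proceed(clocks, worker, staleness):
    """Return True if `worker` may run its next iteration, i.e. if
    advancing keeps it at most `staleness` iterations ahead of the
    slowest worker; otherwise it must wait."""
    return clocks[worker] - min(clocks) < staleness

# Simulate three workers with unequal speeds under a staleness bound of 2.
random.seed(0)
clocks = [0, 0, 0]
speeds = [3, 1, 2]              # relative chance of being scheduled
for _ in range(200):
    w = random.choices(range(3), weights=speeds)[0]
    if ssp_can_proceed(clocks, w, staleness=2):
        clocks[w] += 1          # this worker completes one iteration

# The bound holds throughout: no worker drifts more than 2 iterations ahead.
assert max(clocks) - min(clocks) <= 2
```

    Fast workers proceed without a global barrier (unlike bulk-synchronous execution) yet the staleness bound keeps the error incurred by iterative-convergent algorithms controlled.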