
    A full-discrete exponential Euler approximation of invariant measure for parabolic stochastic partial differential equations

    We discretize the ergodic semilinear stochastic partial differential equations in space dimension $d \leq 3$ with additive noise, spatially by a spectral Galerkin method and temporally by an exponential Euler scheme. It is shown that both the spatial semi-discretization and the spatio-temporal full discretization are ergodic. Further, convergence orders of the numerical invariant measures, depending on the regularity of the noise, are recovered based on an easy time-independent weak error analysis, without relying on Malliavin calculus. To be precise, the convergence order is $1-\epsilon$ in space and $\frac{1}{2}-\epsilon$ in time for the space-time white noise case, and $2-\epsilon$ in space and $1-\epsilon$ in time for the trace-class noise case in space dimension $d = 1$, with arbitrarily small $\epsilon > 0$. Numerical results are finally reported to confirm these theoretical findings.
    Comment: 27 pages, to appear in: Applied Numerical Mathematics
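
    As a minimal illustration of the scheme (not the paper's code), the sketch below applies a spectral Galerkin truncation plus an exponential Euler step to the linear stochastic heat equation on $(0,\pi)$: each sine mode $k$ becomes an Ornstein-Uhlenbeck process $\mathrm{d}X_k = -k^2 X_k\,\mathrm{d}t + q_k\,\mathrm{d}\beta_k$ whose linear part and stochastic convolution are integrated exactly over each step. All names and parameter choices are illustrative.

```python
import math
import random

def exp_euler_spde(N=16, h=0.01, steps=2000, q=lambda k: 1.0 / k ** 2, seed=0):
    """Spectral Galerkin (N sine modes) + exponential Euler for the linear
    stochastic heat equation du = u_xx dt + dW on (0, pi).  Each mode k is
    an OU process; the 'exponential' step uses the exact decay factor
    exp(-k^2 h) and the exact one-step variance of the stochastic
    convolution instead of a plain Euler increment."""
    rng = random.Random(seed)
    x = [0.0] * (N + 1)                    # x[k] = k-th Fourier mode, x[0] unused
    for _ in range(steps):
        for k in range(1, N + 1):
            lam = k * k                    # eigenvalue of -d^2/dx^2
            decay = math.exp(-lam * h)
            # exact variance of the stochastic convolution over one step
            var = q(k) ** 2 * (1.0 - decay ** 2) / (2.0 * lam)
            x[k] = decay * x[k] + math.sqrt(var) * rng.gauss(0.0, 1.0)
    return x
```

    Because the linear part is handled exactly, the discrete modes are themselves ergodic: each $X_k$ converges in law to $N(0, q_k^2/(2k^2))$, mirroring the ergodicity statement above.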

    Item Response Theory based Ensemble in Machine Learning

    In this article, we propose a novel probabilistic framework to improve the accuracy of a weighted majority voting algorithm. In order to assign higher weights to the classifiers that can correctly classify hard-to-classify instances, we introduce the Item Response Theory (IRT) framework to evaluate the samples' difficulty and the classifiers' ability simultaneously. Three models are created with different assumptions suitable for different cases. When making an inference, we keep a balance between accuracy and complexity. In our experiments, all the base models are constructed by single trees via bootstrap. To explain the models, we illustrate how the IRT ensemble model constructs the classification boundary. We also compare their performance with other widely used methods and show that our model performs well on 19 datasets.
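
    A toy sketch of the core idea, assuming a Rasch-style (one-parameter) IRT model rather than the paper's three models: fit classifier abilities and instance difficulties jointly from a 0/1 correctness matrix by gradient ascent, then use the abilities as voting weights. Classifiers that get hard (high-difficulty) instances right end up with higher weight.

```python
import math

def irt_vote_weights(correct, iters=200, lr=0.1):
    """Fit a Rasch (1PL) IRT model to correct[j][i] in {0, 1}, the record of
    classifier j on instance i: P(correct) = sigmoid(ability_j - difficulty_i).
    Gradient ascent on the log-likelihood updates both parameter sets; the
    estimated abilities are returned as voting weights."""
    J, I = len(correct), len(correct[0])
    ability = [0.0] * J
    difficulty = [0.0] * I
    for _ in range(iters):
        for j in range(J):
            for i in range(I):
                p = 1.0 / (1.0 + math.exp(-(ability[j] - difficulty[i])))
                g = correct[j][i] - p          # gradient of the log-likelihood
                ability[j] += lr * g
                difficulty[i] -= lr * g
        # anchor the scale: only ability - difficulty is identified, so
        # shift both sets so difficulties have mean zero
        m = sum(difficulty) / I
        difficulty = [b - m for b in difficulty]
        ability = [a - m for a in ability]
    return ability
```

    A weighted majority vote then sums these abilities over the classifiers voting for each label; the taming of easy-vs-hard instances is what distinguishes this from plain accuracy-based weighting.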

    The Bayesian Inversion Problem for Thermal Average Sampling of Quantum Systems

    In this article, we propose a novel method for sampling potential functions based on noisy observation data of a finite number of observables in quantum canonical ensembles, which leads to the accurate sampling of a wide class of test observables. The method is based on the Bayesian inversion framework, which provides a platform for analyzing the posterior distribution and naturally leads to an efficient numerical sampling algorithm. We highlight that the stability estimate is obtained by treating the potential functions as intermediate variables in the following way: the discrepancy between two sets of observation data of training observables bounds the distance between the corresponding posterior distributions of potential functions, while the latter naturally leads to a bound on the discrepancies between the corresponding thermal averages of test observables. Moreover, the training observables can be more flexible than finite samples of the local density function, which are mostly used in previous research. The method also applies to multi-level quantum systems in the non-adiabatic regime. In addition, we provide extensive numerical tests to verify the accuracy and efficiency of the proposed algorithm.
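
    The Bayesian-inversion workflow can be caricatured with a random-walk Metropolis sampler: given noisy observable data and a forward map from a potential parameter to noiseless observable averages, sample the posterior over that parameter. Everything here (`forward`, the scalar parameterization, the Gaussian prior) is a stand-in assumption for illustration, not the paper's algorithm.

```python
import math
import random

def metropolis_posterior(data, forward, prior_sd=1.0, noise_sd=0.1,
                         steps=3000, step_sd=0.2, seed=0):
    """Random-walk Metropolis for a toy Bayesian inverse problem: sample a
    scalar potential parameter v from
        p(v | data) ~ exp(-sum((data - forward(v))^2) / (2 noise_sd^2))
                      * exp(-v^2 / (2 prior_sd^2)).
    `forward(v)` plays the role of the map from a potential to the thermal
    averages of the training observables."""
    rng = random.Random(seed)

    def log_post(v):
        misfit = sum((d - f) ** 2 for d, f in zip(data, forward(v)))
        return -misfit / (2.0 * noise_sd ** 2) - v ** 2 / (2.0 * prior_sd ** 2)

    v, lp = 0.0, log_post(0.0)
    samples = []
    for _ in range(steps):
        prop = v + rng.gauss(0.0, step_sd)     # symmetric proposal
        lpp = log_post(prop)
        if math.log(rng.random()) < lpp - lp:  # Metropolis accept/reject
            v, lp = prop, lpp
        samples.append(v)
    return samples
```

    Averaging any test observable over these posterior samples gives the thermal-average estimates whose stability the abstract's estimate controls.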

    Learning Semantics-aware Distance Map with Semantics Layering Network for Amodal Instance Segmentation

    In this work, we demonstrate yet another approach to tackle the amodal segmentation problem. Specifically, we first introduce a new representation, namely the semantics-aware distance map (sem-dist map), to serve as our target for amodal segmentation instead of the commonly used masks and heatmaps. The sem-dist map is a kind of level-set representation, in which the different regions of an object are placed into different levels on the map according to their visibility. It is a natural extension of masks and heatmaps, in which modal and amodal segmentation, as well as depth order information, are all well described. Then we introduce a novel convolutional neural network (CNN) architecture, which we refer to as the semantic layering network, to estimate sem-dist maps layer by layer, from the global level to the instance level, for all objects in an image. Extensive experiments on the COCOA and D2SA datasets have demonstrated that our framework can predict amodal segmentation, occlusion and depth order with state-of-the-art performance.
    Comment: This paper is submitted to ACMMM1
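
    The layered encoding behind the sem-dist map can be illustrated with integer levels: given a binary amodal mask and a binary visible mask, merge them into a single map whose level reflects visibility. The real target is a continuous level-set representation predicted by a CNN; this discrete version only shows how modal, amodal, and occlusion information coexist in one map.

```python
def sem_dist_map(amodal, visible):
    """Merge a binary amodal mask and a binary visible mask (2D lists of
    0/1) into one integer level map:
        0 = background, 1 = occluded (amodal but not visible), 2 = visible.
    The visible mask is assumed to be a subset of the amodal mask."""
    H, W = len(amodal), len(amodal[0])
    out = [[0] * W for _ in range(H)]
    for r in range(H):
        for c in range(W):
            if visible[r][c]:
                out[r][c] = 2
            elif amodal[r][c]:
                out[r][c] = 1
    return out
```

    Thresholding the map at level >= 1 recovers the amodal mask and at level 2 the modal (visible) mask, which is the sense in which the representation extends both.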

    Mean-square approximations of L\'{e}vy noise driven SDEs with super-linearly growing diffusion and jump coefficients

    This paper first establishes a fundamental mean-square convergence theorem for general one-step numerical approximations of L\'{e}vy noise driven stochastic differential equations with non-globally Lipschitz coefficients. Then two novel explicit schemes are designed, and their convergence rates are exactly identified via the fundamental theorem. Different from existing works, we do not impose a globally Lipschitz condition on the jump coefficient but formulate appropriate assumptions to allow for its super-linear growth. However, we require that the L\'{e}vy measure be finite. New arguments are developed to handle essential difficulties in the convergence analysis, caused by the super-linear growth of the jump coefficient and the fact that higher moment bounds of the Poisson increments $\int_t^{t+h} \int_Z \bar{N}(\mathrm{d}s,\mathrm{d}z)$, $t \geq 0$, $h > 0$, contribute a magnitude of order not more than $O(h)$. Numerical results are finally reported to confirm the theoretical findings.
    Comment: 34 pages, 2 figures
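
    To make the "explicit scheme with super-linear coefficients" setting concrete, here is a tamed Euler step for an illustrative jump SDE with finite jump rate, $\mathrm{d}X = -X^3\,\mathrm{d}t + X\,\mathrm{d}B + X^2\,\mathrm{d}N$. Taming divides each increment by $1 + h\,|\text{coefficient}|$ so a single step cannot blow up. The coefficients and the taming rule are illustrative choices in the spirit of such schemes, not the paper's exact construction.

```python
import math
import random

def tamed_euler_jump(x0=1.0, T=1.0, n=1000, lam=2.0, seed=1):
    """One path of a tamed explicit Euler scheme for
        dX = -X^3 dt + X dB + X^2 dN,
    where N is a Poisson process with finite rate lam (finite Levy
    measure).  Drift and jump coefficients grow super-linearly; taming
    keeps every increment bounded regardless of |X|."""
    rng = random.Random(seed)
    h = T / n
    x = x0
    for _ in range(n):
        drift = -x ** 3            # super-linear drift
        diff = x                   # diffusion coefficient
        jump = x ** 2              # super-linear jump coefficient
        dB = math.sqrt(h) * rng.gauss(0.0, 1.0)
        # sample k ~ Poisson(lam * h) by inverse transform
        k, p = 0, math.exp(-lam * h)
        cum, u = p, rng.random()
        while u > cum:
            k += 1
            p *= lam * h / k
            cum += p
        # taming: divide each increment by 1 + h * |coefficient|
        x = (x + drift * h / (1.0 + h * abs(drift))
               + diff * dB / (1.0 + h * abs(diff))
               + jump * k / (1.0 + h * abs(jump)))
    return x
```

    Without taming, a plain explicit Euler step with these coefficients is known to diverge in mean square; the tamed increments keep the path stable while agreeing with the untamed scheme to higher order as $h \to 0$.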

    Large deviations principles of sample paths and invariant measures of numerical methods for parabolic SPDEs

    For parabolic stochastic partial differential equations (SPDEs), we show that the numerical methods, including the spatial spectral Galerkin method and, further, the full discretization via the temporal accelerated exponential Euler method, satisfy uniform sample-path large deviations. Combining this with the exponential tail estimate of invariant measures, we establish the large deviations principles (LDPs) of the invariant measures of these numerical methods. Based on the error estimate between the rate function of the considered numerical methods and that of the original equation, we prove that these numerical methods weakly asymptotically preserve the LDPs of sample paths and invariant measures of the original equation. This work provides an approach, by means of minimization sequences, to proving the weakly asymptotical preservation of the above two LDPs for SPDEs with small noise via numerical methods.

    Learning to Optimize Tensor Programs

    We introduce a learning-based framework to optimize tensor programs for deep learning workloads. Efficient implementations of tensor operators, such as matrix multiplication and high-dimensional convolution, are key enablers of effective deep learning systems. However, existing systems rely on manually optimized libraries, such as cuDNN, in which only a narrow range of server-class GPUs is well supported. The reliance on hardware-specific operator libraries limits the applicability of high-level graph optimizations and incurs significant engineering costs when deploying to new hardware targets. We use learning to remove this engineering burden. We learn domain-specific statistical cost models to guide the search for tensor operator implementations over billions of possible program variants. We further accelerate the search by effective model transfer across workloads. Experimental results show that our framework delivers performance competitive with state-of-the-art hand-tuned libraries for low-power CPUs, mobile GPUs, and server-class GPUs.
    Comment: NeurIPS 201
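
    The search loop can be caricatured in a few lines: spend a small hardware-measurement budget, fit a trivial statistical cost model, then pick the candidate the model predicts to be fastest. Here the "model" is just 1-nearest-neighbour over config tuples; real systems use far richer features, learned models, and cross-workload transfer. All names are illustrative.

```python
import random

def model_guided_search(measure, candidates, budget=8, seed=0):
    """Cost-model-guided search over program variants.  `measure(cfg)` is
    the expensive on-hardware timing; only `budget` configs are actually
    measured.  A 1-NN model over the config tuples then ranks every
    candidate, and the predicted-fastest one is returned."""
    rng = random.Random(seed)
    tried = {}
    for cfg in rng.sample(candidates, min(budget, len(candidates))):
        tried[cfg] = measure(cfg)          # costly real measurement

    def predict(cfg):
        # predicted cost = measured cost of the nearest measured config
        key = min(tried, key=lambda t: sum((a - b) ** 2 for a, b in zip(t, cfg)))
        return tried[key]

    return min(candidates, key=predict)
```

    The point of the design is that `predict` is cheap, so billions of variants can be ranked while only a handful are ever timed on hardware.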

    Costly Features Classification using Monte Carlo Tree Search

    We consider the problem of costly feature classification, in which we sequentially select a subset of features to strike a balance between the classification error and the feature cost. In this paper, we first cast the task as an MDP and use the Advantage Actor-Critic algorithm to solve it. In order to further improve the agent's performance and make the policy explainable, we employ Monte Carlo Tree Search to update the policy iteratively. During the procedure, we also consider the method's performance on unbalanced datasets and its sensitivity to missing values. We evaluate our model on multiple datasets and find that it outperforms other methods.
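
    The accuracy/cost trade-off being optimized can be shown with a myopic greedy baseline: repeatedly buy the feature whose estimated accuracy gain minus weighted cost is largest and positive, then stop. The paper replaces this greedy rule with an A2C policy refined by MCTS; the sketch below (with an assumed `gain` estimator) only illustrates the objective.

```python
def greedy_costly_features(gain, cost, alpha=1.0):
    """Greedy sequential feature acquisition.  `gain(f, chosen)` estimates
    the accuracy improvement from adding feature f to the already-chosen
    set; `cost[f]` is its acquisition cost; alpha trades error against
    cost.  Stops when no feature has positive net value."""
    chosen, available = [], set(range(len(cost)))
    while available:
        best = max(available, key=lambda f: gain(f, chosen) - alpha * cost[f])
        if gain(best, chosen) - alpha * cost[best] <= 0:
            break                      # every remaining feature costs more than it helps
        chosen.append(best)
        available.remove(best)
    return chosen
```

    An MCTS-based policy improves on this by looking ahead: a feature with low immediate gain may still be bought if it unlocks a cheap, highly informative follow-up.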

    Photo-Realistic Facial Details Synthesis from Single Image

    We present a single-image 3D face synthesis technique that can handle challenging facial expressions while recovering fine geometric details. Our technique employs expression analysis for proxy face geometry generation and combines supervised and unsupervised learning for facial detail synthesis. For proxy generation, we conduct emotion prediction to determine a new expression-informed proxy. For detail synthesis, we present a Deep Facial Detail Net (DFDN) based on a Conditional Generative Adversarial Net (CGAN) that employs both geometry and appearance loss functions. For geometry, we capture 366 high-quality 3D scans from 122 different subjects under 3 facial expressions. For appearance, we use an additional 20K in-the-wild face images and apply image-based rendering to accommodate lighting variations. Comprehensive experiments demonstrate that our framework can produce high-quality 3D faces with realistic details under challenging facial expressions.

    Relay: A High-Level Compiler for Deep Learning

    Frameworks for writing, compiling, and optimizing deep learning (DL) models have recently enabled progress in areas like computer vision and natural language processing. Extending these frameworks to accommodate the rapidly diversifying landscape of DL models and hardware platforms presents challenging tradeoffs between expressivity, composability, and portability. We present Relay, a new compiler framework for DL. Relay's functional, statically typed intermediate representation (IR) unifies and generalizes existing DL IRs to express state-of-the-art models. The introduction of Relay's expressive IR requires careful design of domain-specific optimizations, addressed via Relay's extension mechanisms. Using these extension mechanisms, Relay supports a unified compiler that can target a variety of hardware platforms. Our evaluation demonstrates Relay's competitive performance for a broad class of models and devices (CPUs, GPUs, and emerging accelerators). Relay's design demonstrates how a unified IR can provide expressivity, composability, and portability without compromising performance.