
    Approximation in $L^p(\mu)$ with deep ReLU neural networks

    We discuss the expressive power of neural networks which use the non-smooth ReLU activation function $\varrho(x) = \max\{0,x\}$ by analyzing the approximation theoretic properties of such networks. The existing results mainly fall into two categories: approximation using ReLU networks with a fixed depth, or using ReLU networks whose depth increases with the approximation accuracy. After reviewing these findings, we show that the results concerning networks with fixed depth, which up to now only consider approximation in $L^p(\lambda)$ for the Lebesgue measure $\lambda$, can be generalized to approximation in $L^p(\mu)$, for any finite Borel measure $\mu$. In particular, the generalized results apply in the usual setting of statistical learning theory, where one is interested in approximation in $L^2(\mathbb{P})$, with the probability measure $\mathbb{P}$ describing the distribution of the data.
    Comment: Accepted for presentation at SampTA 201
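
    As a hedged illustration of the setting (not taken from the paper): the NumPy sketch below hand-constructs a one-hidden-layer ReLU network that realizes a piecewise-linear interpolant of a target function and estimates its approximation error in $L^2(\mathbb{P})$ by Monte Carlo sampling from a probability measure. The target $x \mapsto x^2$, the knot placement, and the Beta(2, 5) choice of $\mathbb{P}$ are all illustrative assumptions.

        import numpy as np

        def relu(x):
            # ReLU activation: varrho(x) = max{0, x}
            return np.maximum(0.0, x)

        def relu_interpolant(x, knots, values):
            # A one-hidden-layer ReLU network, with weights chosen by hand, that
            # realizes the piecewise-linear interpolant of (knots, values):
            # g(x) = v_0 + s_0*(x - t_0) + sum_{i>=1} (s_i - s_{i-1}) * relu(x - t_i)
            slopes = np.diff(values) / np.diff(knots)
            out = values[0] + slopes[0] * (x - knots[0])
            for i in range(1, len(slopes)):
                out = out + (slopes[i] - slopes[i - 1]) * relu(x - knots[i])
            return out

        target = lambda x: x ** 2            # illustrative target function
        knots = np.linspace(0.0, 1.0, 9)     # breakpoints used by the network
        values = target(knots)

        # Empirical L^2(P) error with P a non-Lebesgue probability measure on [0, 1]
        # (here P = Beta(2, 5), standing in for an unknown data distribution).
        rng = np.random.default_rng(0)
        samples = rng.beta(2.0, 5.0, size=100_000)
        residual = target(samples) - relu_interpolant(samples, knots, values)
        print("empirical L^2(P) error:", np.sqrt(np.mean(residual ** 2)))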

    Auto-encoders: reconstruction versus compression

    We discuss the similarities and differences between training an auto-encoder to minimize the reconstruction error, and training the same auto-encoder to compress the data via a generative model. Minimizing a codelength for the data using an auto-encoder is equivalent to minimizing the reconstruction error plus some correcting terms which have an interpretation as either a denoising or contractive property of the decoding function. These terms are related but not identical to those used in denoising or contractive auto-encoders [Vincent et al. 2010, Rifai et al. 2011]. In particular, the codelength viewpoint fully determines an optimal noise level for the denoising criterion.
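
    As a hedged illustration of the two criteria being compared (not taken from the paper): the NumPy sketch below evaluates a plain reconstruction loss and a denoising-style loss for a tiny tied-weight auto-encoder. The architecture, the Gaussian corruption, and the noise level sigma are illustrative assumptions; the paper's correcting terms and its derivation of the optimal noise level are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(256, 8))            # toy data: 256 points in R^8
        W = rng.normal(scale=0.3, size=(8, 3))   # encoder weights; decoder tied to W.T

        def autoencode(X):
            # Encode to a 3-dimensional code, then decode back to R^8.
            H = np.tanh(X @ W)
            return H @ W.T

        def reconstruction_loss(X):
            # Plain auto-encoder criterion: mean squared reconstruction error.
            return np.mean((X - autoencode(X)) ** 2)

        def denoising_loss(X, sigma):
            # Denoising criterion: corrupt the input at noise level sigma,
            # but measure the error against the clean data.
            X_tilde = X + sigma * rng.normal(size=X.shape)
            return np.mean((X - autoencode(X_tilde)) ** 2)

        print("reconstruction loss:", reconstruction_loss(X))
        print("denoising loss (sigma = 0.1):", denoising_loss(X, sigma=0.1))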