
    Resolutions for principal series representations of p-adic GL(n)

    Let F be a nonarchimedean locally compact field with residue characteristic p and G(F) the group of F-rational points of a connected reductive group over F. Following Schneider and Stuhler, one can realize, in a functorial way, any smooth complex finitely generated representation of G(F) as the 0-homology of a certain coefficient system on the semi-simple building of G(F). It is known that this method does not apply in general for smooth mod p representations of G(F), even when G = GL(2). However, we prove that a principal series representation of GL(n,F) over a field of arbitrary characteristic can be realized as the 0-homology of the corresponding coefficient system.
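
    In symbols (the notation is ours, not fixed by the abstract): writing X for the semi-simple building of GL(n,F) and C(π) for the Schneider-Stuhler coefficient system attached to a principal series representation π, the realization asserted above is the isomorphism

        H_0(X, \mathcal{C}(\pi)) \cong \pi.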

    Auto-encoders: reconstruction versus compression

    We discuss the similarities and differences between training an auto-encoder to minimize the reconstruction error, and training the same auto-encoder to compress the data via a generative model. Minimizing a codelength for the data using an auto-encoder is equivalent to minimizing the reconstruction error plus some correcting terms which have an interpretation as either a denoising or contractive property of the decoding function. These terms are related but not identical to those used in denoising or contractive auto-encoders [Vincent et al. 2010, Rifai et al. 2011]. In particular, the codelength viewpoint fully determines an optimal noise level for the denoising criterion.
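
    The denoising criterion referenced above corrupts the input at a chosen noise level and scores reconstruction of the clean data; the abstract's claim is that the codelength objective singles out one optimal value of that level. A minimal sketch in PyTorch, assuming a one-hidden-layer auto-encoder (the class and parameter names are illustrative, and this is the standard denoising objective of Vincent et al., not the paper's codelength derivation):

        import torch
        import torch.nn as nn

        class AutoEncoder(nn.Module):
            def __init__(self, dim, hidden):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh())
                self.decoder = nn.Linear(hidden, dim)

            def forward(self, x):
                return self.decoder(self.encoder(x))

        def denoising_loss(model, x, noise_level):
            # Corrupt the input, then penalize reconstruction of the
            # *clean* data; noise_level is the knob that, per the
            # abstract, the codelength viewpoint determines.
            x_noisy = x + noise_level * torch.randn_like(x)
            return ((model(x_noisy) - x) ** 2).mean()

        model = AutoEncoder(dim=784, hidden=64)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        x = torch.randn(32, 784)  # stand-in batch; real data goes here
        loss = denoising_loss(model, x, noise_level=0.1)
        opt.zero_grad()
        loss.backward()
        opt.step()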

    Online Natural Gradient as a Kalman Filter

    We cast Amari's natural gradient in statistical learning as a specific case of Kalman filtering. Namely, applying an extended Kalman filter to estimate a fixed unknown parameter of a probabilistic model from a series of observations is rigorously equivalent to estimating this parameter via an online stochastic natural gradient descent on the log-likelihood of the observations. In the i.i.d. case, this relation is a consequence of the "information filter" phrasing of the extended Kalman filter. In the recurrent (state space, non-i.i.d.) case, we prove that the joint Kalman filter over states and parameters is a natural gradient on top of real-time recurrent learning (RTRL), a classical algorithm to train recurrent models. This exact algebraic correspondence provides relevant interpretations for natural gradient hyperparameters such as the learning rate, or the initialization and regularization of the Fisher information matrix.
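
    The i.i.d. case can be checked by hand on a toy model. For y ~ N(θ, σ²) the Fisher information of one observation is 1/σ², so an online natural gradient step with rate 1/t coincides with the information-filter form of the Kalman update for the static parameter θ. A numerical sketch (our toy example, not taken from the paper):

        import numpy as np

        # Toy i.i.d. model: estimate the mean theta of y ~ N(theta, sigma2).
        rng = np.random.default_rng(0)
        sigma2 = 0.5
        ys = rng.normal(loc=1.3, scale=np.sqrt(sigma2), size=200)

        # Online natural gradient with step size 1/t:
        # theta += (1/t) * Fisher^{-1} * score, with Fisher = 1/sigma2.
        theta_ng = 0.0
        for t, y in enumerate(ys, start=1):
            score = (y - theta_ng) / sigma2
            theta_ng += (1.0 / t) * sigma2 * score

        # Kalman filter for the static parameter, in information form:
        # J accumulates Fisher information; the update is J^{-1} * score.
        theta_kf, J = 0.0, 0.0
        for y in ys:
            J += 1.0 / sigma2
            theta_kf += (1.0 / J) * (y - theta_kf) / sigma2

        print(theta_ng, theta_kf)  # identical: both are the running mean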

    An inverse Satake isomorphism in characteristic p

    Let F be a local field with finite residue field of characteristic p and k an algebraic closure of the residue field. Let G be the group of F-points of an F-split connected reductive group. In the apartment corresponding to a chosen maximal split torus T, we fix a hyperspecial vertex and denote by K the corresponding maximal compact subgroup of G. Given an irreducible smooth k-representation ρ of K, we construct an isomorphism from the affine semigroup k-algebra of the dominant cocharacters of T onto the Hecke algebra H(G, ρ). In the case when the derived subgroup of G is simply connected, we prove furthermore that our isomorphism is the inverse to the Satake isomorphism constructed by Herzig.
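
    In symbols, writing X_*(T)^+ for the monoid of dominant cocharacters of T (standard notation, assumed here rather than quoted from the abstract), the constructed map is an isomorphism of k-algebras

        k[X_*(T)^+] \xrightarrow{\ \sim\ } H(G, \rho),

    and, when the derived subgroup of G is simply connected, it inverts Herzig's Satake isomorphism H(G, ρ) → k[X_*(T)^+].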