
    LPS-Type Ramanujan Graphs from Definite Quaternion Algebras over $\mathbb{Q}$ of Class Number One

    In this paper we construct explicit LPS-type Ramanujan graphs from each definite quaternion algebra over $\mathbb{Q}$ of class number 1, extending the constructions of Lubotzky, Phillips, Sarnak, and later Chiu, and answering in the affirmative a question raised by Jo and Yamasaki. We do this by showing that for each definite quaternion algebra $\mathcal{H}$ over $\mathbb{Q}$ of class number 1 with maximal order $\mathcal{O}$, if $G = \mathcal{H}^\times/Z(\mathcal{H}^\times)$ and $p$ is prime such that $G(\mathbb{Q}_p) \cong PGL_2(\mathbb{Q}_p)$, then there exists a congruence $p$-arithmetic subgroup of $G$ which acts simply transitively on the Bruhat-Tits tree of $G(\mathbb{Q}_p)$. Comment: 15 pages. Comments welcome.
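    Stated compactly, the claim summarized above reads as follows (a paraphrase of the abstract, not the paper's exact theorem statement):

        \[
          G = \mathcal{H}^\times / Z(\mathcal{H}^\times), \qquad
          G(\mathbb{Q}_p) \cong PGL_2(\mathbb{Q}_p)
          \;\Longrightarrow\;
          \exists\, \Gamma \le G \ \text{congruence $p$-arithmetic, acting simply transitively on the Bruhat-Tits tree of}\ G(\mathbb{Q}_p).
        \]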

    The Fell topology and the modular Gromov-Hausdorff propinquity

    Given a unital AF-algebra $A$ equipped with a faithful tracial state, we equip each (norm-closed two-sided) ideal of $A$, canonically viewed as a module over $A$, with a metrized quantum vector bundle structure in the sense of Latrémolière, using previous work of the first author and Latrémolière. Moreover, we show that convergence of ideals in the Fell topology implies convergence of the associated metrized quantum vector bundles in the modular Gromov-Hausdorff propinquity of Latrémolière. In a similar vein but requiring a different approach, given a compact metric space $(X,d)$, we equip each ideal of $C(X)$ with a metrized quantum vector bundle structure, and show that convergence in the Fell topology implies convergence in the modular Gromov-Hausdorff propinquity. Comment: 13 pages.

    Normalization effects on shallow neural networks and related asymptotic expansions

    We consider shallow (single hidden layer) neural networks and characterize their performance when trained with stochastic gradient descent as the number of hidden units $N$ and the number of gradient descent steps grow to infinity. In particular, we investigate the effect of different scaling schemes, which lead to different normalizations of the neural network, on the network's statistical output, closing the gap between the $1/\sqrt{N}$ and the mean-field $1/N$ normalization. We develop an asymptotic expansion for the neural network's statistical output, pointwise with respect to the scaling parameter, as the number of hidden units grows to infinity. Based on this expansion, we demonstrate mathematically that to leading order in $N$ there is no bias-variance trade-off, in that both bias and variance (both explicitly characterized) decrease as the number of hidden units increases and time grows. In addition, we show that to leading order in $N$, the variance of the neural network's statistical output decays as the normalization implied by the scaling parameter approaches the mean-field normalization. Numerical studies on the MNIST and CIFAR10 datasets show that test and train accuracy monotonically improve as the neural network's normalization gets closer to the mean-field normalization.
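    A minimal sketch of the scaling scheme the abstract studies (an illustration, not the authors' code; the tanh activation, Gaussian initialization, and all names are assumptions): the output of a single-hidden-layer network is divided by $N^\gamma$, so that $\gamma = 1/2$ recovers the $1/\sqrt{N}$ normalization and $\gamma = 1$ the mean-field $1/N$ normalization.

        import numpy as np

        def shallow_net(x, W, b, c, gamma):
            """Single-hidden-layer network with N hidden units, output scaled by 1/N**gamma."""
            N = W.shape[0]                      # number of hidden units
            hidden = np.tanh(W @ x + b)         # hidden-layer activations (tanh is an assumption)
            return (c @ hidden) / N**gamma      # the network's statistical output

        rng = np.random.default_rng(0)
        N, d = 1000, 3                          # hidden units and input dimension (illustrative)
        W, b, c = rng.normal(size=(N, d)), rng.normal(size=N), rng.normal(size=N)
        x = rng.normal(size=d)

        for gamma in (0.5, 0.75, 1.0):          # sweep from 1/sqrt(N) toward the mean-field scaling
            print(gamma, shallow_net(x, W, b, c, gamma))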

    Normalization effects on deep neural networks

    We study the effect of normalization on the layers of deep neural networks of feed-forward type. A given layer $i$ with $N_i$ hidden units is allowed to be normalized by $1/N_i^{\gamma_i}$ with $\gamma_i \in [1/2,1]$, and we study the effect of the choice of the $\gamma_i$ on the statistical behavior of the neural network's output (such as its variance) as well as on the test accuracy on the MNIST data set. We find that in terms of the variance of the neural network's output and the test accuracy the best choice is to take the $\gamma_i$'s equal to one, which is the mean-field scaling. We also find that this is particularly true for the outer layer, in that the neural network's behavior is more sensitive to the scaling of the outer layer than to the scaling of the inner layers. The mechanism for the mathematical analysis is an asymptotic expansion for the neural network's output. An important practical consequence of the analysis is that it provides a systematic and mathematically informed way to choose the learning rate hyperparameters; such a choice guarantees that the neural network behaves in a statistically robust way as the $N_i$ grow to infinity. Comment: arXiv admin note: text overlap with arXiv:2011.1048
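    The layer-wise normalization can be sketched as follows (an illustration under assumed choices, tanh activations and Gaussian weights, not the paper's implementation): the $N_i$ activations of layer $i$ are divided by $N_i^{\gamma_i}$, and setting every $\gamma_i = 1$ gives the mean-field scaling the abstract finds best, especially for the outer layer.

        import numpy as np

        def deep_net(x, layers, gammas):
            """layers: list of (W_i, b_i) with W_i of shape (N_i, N_{i-1});
            gammas: one gamma_i in [1/2, 1] per layer."""
            h = x
            for (W, b), gamma in zip(layers, gammas):
                N_i = W.shape[0]                      # number of hidden units in layer i
                h = np.tanh(W @ h + b) / N_i**gamma   # layer i output normalized by 1/N_i**gamma_i
            return h

        rng = np.random.default_rng(1)
        widths = [3, 500, 500, 1]                     # input dim, two hidden layers, scalar output (illustrative)
        layers = [(rng.normal(size=(n_out, n_in)), rng.normal(size=n_out))
                  for n_in, n_out in zip(widths[:-1], widths[1:])]

        # gamma_i = 1 for every layer is the mean-field choice the abstract recommends.
        print(deep_net(rng.normal(size=3), layers, gammas=[1.0, 1.0, 1.0]))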

    Transfer of derived equivalences from subalgebras to endomorphism algebras II

    We investigate derived equivalences between subalgebras of some $\Phi$-Auslander-Yoneda algebras from a class of $n$-angles in weakly $n$-angulated categories. The derived equivalences are obtained by transferring subalgebras induced by $n$-angles to endomorphism algebras induced by approximation sequences. Then we extend our constructions \cite{BP} to $n$-angle cases. Finally, we give an explicit example to illustrate our result. Comment: All comments are welcome. The paper has been submitted. Some errors are corrected. arXiv admin note: text overlap with arXiv:1905.1129