LPS-Type Ramanujan Graphs from Definite Quaternion Algebras over $\mathbb{Q}$ of Class Number One
In this paper we construct explicit LPS-type Ramanujan graphs from each
definite quaternion algebra over $\mathbb{Q}$ of class number 1, extending the
constructions of Lubotzky, Phillips, and Sarnak, and later Chiu, and answering
in the affirmative a question raised by Jo and Yamasaki. We do this by showing
that for each definite quaternion algebra over $\mathbb{Q}$ of class number 1
with maximal order $\mathcal{O}$, and for each prime $p$ satisfying suitable
congruence conditions, there exists a congruence $p$-arithmetic subgroup which
acts simply transitively on the Bruhat-Tits tree of $\mathrm{PGL}_2(\mathbb{Q}_p)$.
Comment: 15 pages. Comments welcome.
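For context, the spectral property behind the construction (a standard
definition, added here for orientation rather than taken from the abstract): a
connected $k$-regular graph is Ramanujan when every eigenvalue $\lambda$ of its
adjacency matrix with $|\lambda| \neq k$ satisfies

    \[ |\lambda| \le 2\sqrt{k-1}, \]

which is asymptotically optimal by the Alon-Boppana theorem; the LPS
construction realizes this bound with explicit $(p+1)$-regular Cayley graphs.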
The Fell topology and the modular Gromov-Hausdorff propinquity
Given a unital AF algebra $A$ equipped with a faithful tracial state, we equip
each (norm-closed, two-sided) ideal of $A$, canonically viewed as a module over
$A$, with a metrized quantum vector bundle structure in the sense of
Latrémolière, using previous work of the first author and Latrémolière.
Moreover, we show that convergence of ideals in the Fell topology implies
convergence of the associated metrized quantum vector bundles in the modular
Gromov-Hausdorff propinquity of Latrémolière. In a similar vein, but requiring
a different approach, given a compact metric space $(X, d)$, we equip each
ideal of $C(X)$ with a metrized quantum vector bundle structure and show that
convergence in the Fell topology implies convergence in the modular
Gromov-Hausdorff propinquity.
Comment: 13 pages.
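For orientation (a standard fact we add here, not part of the abstract), the
commutative case is governed by the correspondence between closed ideals of
$C(X)$ and closed subsets of $X$:

    \[ I_F = \{ f \in C(X) : f|_F = 0 \}, \qquad F \subseteq X \text{ closed}, \]

so convergence of ideals in the Fell topology amounts to convergence of the
corresponding closed sets, which the result above upgrades to convergence of
the associated metrized quantum vector bundles.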
Normalization effects on shallow neural networks and related asymptotic expansions
We consider shallow (single hidden layer) neural networks and characterize
their performance when trained with stochastic gradient descent as the number
of hidden units $N$ and gradient descent steps grow to infinity. In particular,
we investigate the effect of different scaling schemes, which lead to different
normalizations of the neural network, on the network's statistical output,
closing the gap between the $1/\sqrt{N}$ and the mean-field $1/N$
normalizations. We develop an asymptotic expansion for the neural network's
statistical output, pointwise with respect to the scaling parameter, as the
number of hidden units grows to infinity. Based on this expansion, we
demonstrate mathematically that to leading order in $N$ there is no
bias-variance trade-off, in that both bias and variance (both explicitly
characterized) decrease as the number of hidden units increases and time grows.
In addition, we show that to leading order in $N$ the variance of the neural
network's statistical output decays as the normalization implied by the scaling
parameter approaches the mean-field normalization. Numerical studies on the
MNIST and CIFAR10 datasets show that test and train accuracy improve
monotonically as the neural network's normalization gets closer to the
mean-field normalization.
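As a concrete illustration of the scaling schemes in question (a minimal
sketch of our own; the activation, data, and names are assumed rather than
taken from the paper), the output scaled by $N^{-\gamma}$ interpolates between
the $1/\sqrt{N}$ scaling at $\gamma = 1/2$ and the mean-field $1/N$ scaling at
$\gamma = 1$:

    import numpy as np

    def shallow_output(x, W, c, gamma):
        # Single-hidden-layer network with N hidden units, output normalized
        # by N**(-gamma): gamma = 0.5 is the 1/sqrt(N) scaling, gamma = 1.0
        # the mean-field scaling.
        N = W.shape[0]
        hidden = np.tanh(W @ x)         # hidden-unit activations, shape (N,)
        return (c @ hidden) / N**gamma  # scaled linear read-out

    rng = np.random.default_rng(0)
    N, d = 1000, 5
    W = rng.normal(size=(N, d))         # input-to-hidden weights
    c = rng.normal(size=N)              # hidden-to-output weights
    x = rng.normal(size=d)
    for gamma in (0.5, 0.75, 1.0):
        print(gamma, shallow_output(x, W, c, gamma))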
Normalization effects on deep neural networks
We study the effect of normalization on the layers of deep neural networks of
feed-forward type. A given layer $i$ with $N_i$ hidden units is allowed to be
normalized by $1/N_i^{\gamma_i}$ with $\gamma_i \in [1/2, 1]$, and we study the
effect of the choice of the $\gamma_i$ on the statistical behavior of the
neural network's output (such as its variance) as well as on the test accuracy
on the MNIST data set. We find that, in terms of the variance of the neural
network's output and of test accuracy, the best choice is to take the
$\gamma_i$'s equal to one, which is the mean-field scaling. We also find that
this is particularly true for the outer layer, in that the neural network's
behavior is more sensitive to the scaling of the outer layer than to the
scaling of the inner layers. The mechanism for the mathematical analysis is an
asymptotic expansion of the neural network's output. An important practical
consequence of the analysis is that it provides a systematic and mathematically
informed way to choose the learning-rate hyperparameters; such a choice
guarantees that the neural network behaves in a statistically robust way as the
$N_i$ grow to infinity.
Comment: arXiv admin note: text overlap with arXiv:2011.1048
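A minimal sketch of the per-layer normalization just described (our own
illustration; the widths, activation, and read-out are assumptions): hidden
layer $i$, with $N_i$ units, feeds forward through a $1/N_i^{\gamma_i}$ factor,
and taking every $\gamma_i = 1$ gives the mean-field scaling the abstract
singles out:

    import numpy as np

    def deep_output(x, weights, c, gammas):
        # Feed-forward pass in which the N_i activations of hidden layer i
        # enter the next layer through a 1/N_i**gamma_i normalization;
        # gammas[-1] scales the outer (read-out) layer.
        h = np.tanh(weights[0] @ x)                 # first hidden layer
        for W, gamma in zip(weights[1:], gammas[:-1]):
            h = np.tanh(W @ h / h.shape[0]**gamma)  # inner-layer normalization
        return c @ h / h.shape[0]**gammas[-1]       # outer-layer normalization

    rng = np.random.default_rng(1)
    sizes = [5, 200, 200, 100]                      # input dim, then layer widths
    weights = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
    c = rng.normal(size=sizes[-1])
    x = rng.normal(size=sizes[0])
    print(deep_output(x, weights, c, gammas=[1.0, 1.0, 1.0]))  # mean-field choice

Sweeping the outer gamma while holding the inner ones fixed reproduces the
kind of comparison the abstract reports as most sensitive.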
Transfer of derived equivalences from subalgebras to endomorphism algebras II
We investigate derived equivalences between subalgebras of some
$\Phi$-Auslander-Yoneda algebras arising from a class of $n$-angles in weakly
$n$-angulated categories. The derived equivalences are obtained by transferring
subalgebras induced by $n$-angles to endomorphism algebras induced by
approximation sequences. We then extend our constructions of \cite{BP} to the
$n$-angle case. Finally, we give an explicit example to illustrate our result.
Comment: All comments are welcome. The paper has been submitted. Some errors
are corrected. arXiv admin note: text overlap with arXiv:1905.1129