Counting Number Fields by Discriminant
The central topic of this dissertation is counting number fields ordered by discriminant. We fix a base field k and let Nd(k,G;X) be the number of extensions N/k, up to isomorphism, with Nk/Q(dN/k) ≤ X, [N : k] = d, and the Galois group of the Galois closure of N/k isomorphic to G.
We establish two main results in this work. First, we prove upper bounds for N|G|(k,G;X) in the case that G is a finite group with an abelian normal subgroup. Further, we prove upper bounds for N|F|(k,G;X) in the case that G is a Frobenius group with an abelian Frobenius kernel F.
Second, we establish an asymptotic expression for N6(Q,A4;X). We show that N6(Q,A4;X) = CX^(1/2) + O(X^(0.426...)) and indicate what is expected under the ℓ-torsion conjecture and the Lindelöf Hypothesis.
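In this notation (writing Ñ for the Galois closure of N/k), the counting function and the A4 asymptotic stated above read:

```latex
N_d(k, G; X) = \#\bigl\{\, N/k \text{ up to isomorphism} : [N:k] = d,\
\operatorname{Gal}(\widetilde{N}/k) \cong G,\
\mathrm{N}_{k/\mathbb{Q}}\bigl(d_{N/k}\bigr) \le X \,\bigr\},
\qquad
N_6(\mathbb{Q}, A_4; X) = C\,X^{1/2} + O\!\bigl(X^{0.426\ldots}\bigr).
```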
We begin by stating precisely the results established in this work and by giving a historical overview of the problem of counting number fields.
In Chapter 2, we establish background material in the areas of ramification of prime numbers and analytic number theory.
In Chapter 3, we establish the asymptotic result for N6(Q,A4;X).
In Chapter 4, we establish upper bounds for Nd(k,G;X) for groups with a normal abelian subgroup and for Frobenius groups. Finally, we conclude in Chapter 5 with certain extensions of the method. In particular, we indicate how to count extensions of different degrees, and we discuss how to use average results on the size of the torsion of the class group over almost all extensions in a certain family.
A Survey of Word Reordering Model in Statistical Machine Translation
Machine translation is the process of translating one natural language into another by computer. In statistical machine translation, word reordering is a major challenge for distant language pairs and an important factor in translation quality and efficiency. Word reordering is especially challenging for Indian languages, which have large structural differences from languages such as English; English and Hindi are one such pair. This paper presents an overview of statistical machine translation, reordering models, and reordering types.
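Reordering models of the kind this survey covers are often built on a distance-based distortion penalty. The sketch below is not drawn from the survey itself; it is the classic Koehn-style distortion model from phrase-based SMT, with an illustrative `alpha` and a 0-indexed span convention.

```python
def distortion_cost(source_spans, alpha=0.9):
    """Classic distance-based distortion model: each jump between the source
    phrase translated next and the one translated just before it is penalized
    by alpha ** |start_i - end_{i-1} - 1|. Spans are 0-indexed (start, end)
    positions in the source sentence, listed in target (translation) order."""
    cost = 1.0
    prev_end = -1  # pretend a phrase ended just before source position 0
    for start, end in source_spans:
        jump = abs(start - prev_end - 1)  # distance of the reordering jump
        cost *= alpha ** jump
        prev_end = end
    return cost
```

Translating phrases in source order costs 1.0, while reordering them, as English–Hindi word-order differences often require, is penalized multiplicatively per jump.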
Convexifying Transformers: Improving optimization and understanding of transformer networks
Understanding the fundamental mechanism behind the success of transformer
networks is still an open problem in the deep learning literature. Although
their remarkable performance has been mostly attributed to the self-attention
mechanism, the literature still lacks a solid analysis of these networks and
interpretation of the functions learned by them. To this end, we study the
training problem of attention/transformer networks and introduce a novel convex
analytic approach to improve the understanding and optimization of these
networks. In particular, we first introduce a convex alternative to the
self-attention mechanism and reformulate the regularized training problem of
transformer networks with our alternative convex attention. Then, we cast the
reformulation as a convex optimization problem that is interpretable and easier
to optimize. Moreover, as a byproduct of our convex analysis, we reveal an
implicit regularization mechanism, which promotes sparsity across tokens.
Therefore, we not only improve the optimization of attention/transformer
networks but also provide a solid theoretical understanding of the functions
learned by them. We also demonstrate the effectiveness of our theory through
several numerical experiments.
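For reference, the standard softmax self-attention that the abstract says is replaced by a convex alternative can be sketched as follows; the convex reformulation itself is not reproduced here, and the shapes and weight names are illustrative.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Standard (non-convex) softmax self-attention. X: (tokens, dim);
    Wq/Wk/Wv: (dim, d). Returns the attended output and the row-stochastic
    attention matrix A, where A = softmax(X Wq (X Wk)^T / sqrt(d))."""
    d = Wq.shape[1]
    Q, K, V = X @ Wq, X @ Wk, X @ Wv               # queries, keys, values
    scores = Q @ K.T / np.sqrt(d)                  # scaled dot-product scores
    scores -= scores.max(axis=1, keepdims=True)    # stabilize the softmax
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)              # each row sums to 1
    return A @ V, A
```

The softmax makes the map from weights to outputs non-convex, which is the obstacle the paper's convex analytic approach addresses.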
Mechanic: A Learning Rate Tuner
We introduce \textsc{mechanic}, a technique for automatically tuning the
learning rate scale factor of any base optimization algorithm and schedule.
Our method provides a practical realization of recent
theoretical reductions for accomplishing a similar goal in online convex
optimization. We rigorously evaluate \textsc{mechanic} on a range of large
scale deep learning tasks with varying batch sizes, schedules, and base
optimization algorithms. These experiments demonstrate that depending on the
problem, \textsc{mechanic} either comes very close to, matches or even improves
upon manual tuning of learning rates.
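The interface such a tuner exposes can be sketched as a wrapper that rescales whatever update the base optimizer proposes. The adaptation rule below is a simplified hypergradient-style stand-in, not the online-convex-optimization reduction that \textsc{mechanic} actually implements, and all names are illustrative.

```python
import numpy as np

class ScaledUpdateTuner:
    """Toy illustration of a learning-rate scale tuner's interface: multiply
    the base optimizer's proposed update by a scalar s_t. The rule used to
    adapt s_t (grow/shrink by the sign of the correlation between the previous
    update and the new gradient) is a hypothetical stand-in, NOT mechanic."""

    def __init__(self, scale=1.0, adapt=0.1):
        self.s = scale            # current scale factor s_t
        self.adapt = adapt        # multiplicative adaptation rate
        self.prev_update = None   # base update applied at the previous step

    def step(self, params, grad, base_update):
        if self.prev_update is not None:
            # Positive correlation: the last step undershot, so grow s_t;
            # negative correlation: it overshot, so shrink s_t.
            corr = float(np.dot(self.prev_update, grad))
            self.s *= float(np.exp(self.adapt * np.sign(corr)))
        self.prev_update = base_update
        return params - self.s * base_update
```

On a toy quadratic, feeding the raw gradient in as the base update drives the parameter toward the minimum while s_t settles near a stable effective step size.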