L-Functions for Symmetric Products of Kloosterman Sums
The classical Kloosterman sums give rise to a Galois representation of the
function field unramified outside 0 and ∞. We study the local monodromy
of this representation at ∞ using the ℓ-adic method based on the work of
Deligne and Katz. As an application, we determine the degrees and the bad
factors of the L-functions of the symmetric products of the above
representation. Our results generalize some results of Robba obtained through
the p-adic method.

Comment: 25 pages
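For context, the classical Kloosterman sum the abstract refers to can be written as follows (a standard definition, not spelled out in the abstract; here p is a prime and a, b are integers prime to p):

```latex
\[
  \mathrm{Kl}(a,b;p) \;=\; \sum_{x=1}^{p-1}
    \exp\!\Bigl(\tfrac{2\pi i\,(ax + b\bar{x})}{p}\Bigr),
  \qquad x\bar{x} \equiv 1 \pmod{p}.
\]
```

The associated Galois representation is the rank-two ℓ-adic Kloosterman sheaf, whose Frobenius traces recover these sums; it is lisse on G_m, which is the "unramified outside 0 and ∞" statement above.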
On Katz's (A,B)-exponential sums
We deduce Katz's theorems for (A,B)-exponential sums over finite fields
using ℓ-adic cohomology and a theorem of Denef-Loeser, removing the
hypothesis that AB is relatively prime to the characteristic p. In some
degenerate cases, the Betti number estimate is improved using toric
decomposition and Adolphson-Sperber's bound for the degree of L-functions.
Applying the facial decomposition theorem in \cite{W1}, we prove that the
universal family of (A,B)-polynomials is generically ordinary for its
L-function when p lies in certain arithmetic progressions.
A Class of Incomplete Character Sums
Using ℓ-adic cohomology of tensor inductions of lisse Q̄_ℓ-sheaves, we study a class of incomplete character sums.

Comment: Following the suggestion of the referee, we use tensor induction to
study a class of incomplete character sums. Originally we used transfer, which
is a special case of tensor induction and which only works for rank-one
sheaves. The paper is to appear in the Quarterly Journal of Mathematics.
Fast k-means based on KNN Graph
In the era of big data, k-means clustering has been widely adopted as a basic
processing tool in various contexts. However, its computational cost becomes
prohibitively high when the data size and the cluster number are large. It is
well known that the processing bottleneck of k-means lies in the operation of
seeking the closest centroid in each iteration. In this paper, a novel solution
to the scalability issue of k-means is presented. In the proposal, k-means is
supported by an approximate k-nearest-neighbor graph. In each k-means
iteration, a data sample is compared only to the clusters in which its nearest
neighbors reside. Since the number of nearest neighbors considered is much
smaller than k, the processing cost of this step becomes minor and independent
of k. The processing bottleneck is therefore overcome. Most interestingly, the
k-nearest-neighbor graph is itself constructed by iteratively calling the fast
k-means. Compared with existing fast k-means variants, the proposed algorithm
achieves a hundreds-to-thousands-times speed-up while maintaining high
clustering quality. When tested on 10 million 512-dimensional data points, it
takes only 5.2 hours to produce 1 million clusters, whereas the same scale of
clustering would take 3 years with traditional k-means.
- …
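The neighborhood-restricted assignment step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: an exact brute-force KNN graph and plain NumPy stand in for the approximate graph and the optimized code, and all function names are ours.

```python
import numpy as np

def knn_graph(X, k):
    """Exact k-nearest-neighbour graph by brute force.

    The paper builds this graph *approximately* by calling the fast
    k-means itself; exact search keeps the sketch short."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    return np.argsort(d, axis=1)[:, :k]  # (n, k) neighbour indices

def graph_kmeans(X, centroids, k=5, iters=10):
    """k-means in which each sample is compared only to the clusters
    where its k nearest neighbours reside, not to all centroids."""
    neighbors = knn_graph(X, k)
    # One full assignment pass to initialise the labels.
    labels = np.argmin(((X[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    for _ in range(iters):
        for i in range(len(X)):
            # Candidate clusters: the sample's own cluster plus the
            # clusters of its nearest neighbours -- far fewer than the
            # total cluster count, so this step no longer scales with it.
            cand = np.unique(np.append(labels[neighbors[i]], labels[i]))
            d = ((centroids[cand] - X[i]) ** 2).sum(-1)
            labels[i] = cand[np.argmin(d)]
        for c in range(len(centroids)):  # standard centroid update
            members = X[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return labels, centroids
```

Because a sample's nearest neighbors rarely span more than a handful of clusters, the inner distance computation touches only a few candidate centroids instead of all of them, which is the source of the speed-up claimed above.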