Idempotent permutations
Together with a characteristic function, idempotent permutations uniquely
determine idempotent maps and their linearly ordered arrangements
simultaneously. Furthermore, in-place linear-time transformations are possible
between them. Hence, they may be important for succinct data structures,
information storage, sorting, and searching.
In this study, their combinatorial interpretation is given and their
application to sorting is examined. Given an array of n integer keys, each in
[1,n], if the keys may be modified within the range [-n,n], idempotent
permutations make it possible to obtain a linearly ordered arrangement of the
keys in O(n) time using only 4log(n) bits, matching the theoretical lower bound
on the time and space complexity of sorting. If the keys may not be modified
outside the range [1,n], then n+4log(n) bits are required, n of which are used
to tag some of the keys.
Comment: 32 pages
On Euclidean Norm Approximations
Euclidean norm calculations arise frequently in scientific and engineering
applications. Several approximations for this norm with differing complexity
and accuracy have been proposed in the literature. Earlier approaches were
based on minimizing the maximum error. Recently, Seol and Cheun proposed an
approximation based on minimizing the average error. In this paper, we first
examine these approximations in detail, show that they fit into a single
mathematical formulation, and compare their average and maximum errors. We then
show that the maximum errors given by Seol and Cheun are significantly
optimistic.
Comment: 9 pages, 1 figure, Pattern Recognition
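One concrete instance of such a single formulation is the classic "alpha-max-plus-beta-min" family, α·max(|x|,|y|) + β·min(|x|,|y|). The coefficients below are common minimax-style textbook values, not necessarily the ones analyzed by Seol and Cheun; the fragment is only an illustrative sketch of how average and maximum relative errors can be compared empirically.

```python
import math
import random

def alpha_max_beta_min(x, y, alpha=0.96043387, beta=0.39782473):
    """Approximate sqrt(x*x + y*y) without a square root."""
    ax, ay = abs(x), abs(y)
    return alpha * max(ax, ay) + beta * min(ax, ay)

# Estimate maximum and average relative error over random samples.
random.seed(0)
errs = []
for _ in range(100_000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    true = math.hypot(x, y)
    if true > 1e-9:
        errs.append(abs(alpha_max_beta_min(x, y) - true) / true)

print(f"max relative error  = {max(errs):.4f}")
print(f"mean relative error = {sum(errs) / len(errs):.4f}")
```

For these coefficients the worst-case relative error is a few percent, while the average error is noticeably smaller, which is exactly the min-max versus average-error trade-off the abstract discusses.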
Cosine Similarity Measure According to a Convex Cost Function
In this paper, we describe a new vector similarity measure associated with a
convex cost function. Given two vectors, we determine the surface normals of
the convex function at the vectors. The angle between the two surface normals
is the similarity measure. The convex cost function can be the negative
entropy function, the total variation (TV) function, or the filtered variation
function. The convex cost function need not be differentiable everywhere. In
general, we need to compute the gradient of the cost function to compute the
surface normals. If the gradient does not exist at a given vector, it is
possible to use subgradients, and the normal producing the smallest angle
between the two vectors is used to compute the similarity measure.
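As a hedged sketch of this construction, the fragment below uses the gradient of the negative entropy cost, ∇φ(v) with φ(v) = Σ vᵢ log vᵢ, as the surface-normal direction and returns the angle between the normals at the two vectors. The eps clamp for non-positive entries is an implementation assumption, not part of the paper, and the gradient function can be swapped for that of another convex cost.

```python
import math

def entropy_gradient(v, eps=1e-12):
    """Gradient of the negative entropy phi(v) = sum v_i * log(v_i).

    Entries are clamped to eps (an assumption here) so the log is defined.
    """
    return [1.0 + math.log(max(x, eps)) for x in v]

def gradient_angle_similarity(a, b, grad=entropy_gradient):
    """Angle (radians) between the cost-function normals at a and b.

    A smaller angle means the vectors are more similar under the chosen
    convex cost; grad may be any gradient (or subgradient) routine.
    """
    ga, gb = grad(a), grad(b)
    dot = sum(x * y for x, y in zip(ga, gb))
    na = math.sqrt(sum(x * x for x in ga))
    nb = math.sqrt(sum(x * x for x in gb))
    c = max(-1.0, min(1.0, dot / (na * nb)))  # guard rounding drift
    return math.acos(c)
```

Identical vectors yield identical gradients and hence a zero angle, while dissimilar vectors produce normals pointing in different directions and a larger angle.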