3,133 research outputs found
A low multiplicative complexity fast recursive DCT-2 algorithm
A fast Discrete Cosine Transform (DCT) algorithm is introduced that can be of
particular interest in image processing. The main features of the algorithm are
the regularity of its graph and very low arithmetic complexity. The 16-point
version of the algorithm requires only 32 multiplications and 81 additions. The
computational core of the algorithm consists of only 17 nontrivial
multiplications; the remaining 15 are scaling factors that can be compensated
for in post-processing. The derivation of the algorithm is based on algebraic
signal processing theory (ASP).
Comment: 4 pages, 2 figures
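For context, the transform this algorithm accelerates is the standard DCT-II;
a direct evaluation takes N^2 multiplications for an N-point input, which is
the baseline the 32-multiplication 16-point algorithm improves on. A minimal
sketch of the naive transform (not the paper's fast recursive algorithm):

```python
import math

def dct2(x):
    # Naive O(N^2) DCT-II (unnormalized): X[k] = sum_n x[n] * cos(pi*(n+0.5)*k/N)
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

# Sanity check: for a constant 16-point input, only the DC coefficient survives.
X = dct2([1.0] * 16)
```

A fast algorithm such as the one described factors this matrix-vector product
into a sparse recursive flow graph, which is where the multiplication count
drops from 256 to 32.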
Poncelet's Theorem, Paraorthogonal Polynomials and the Numerical Range of Compressed Multiplication Operators
There has been considerable recent literature connecting Poncelet's theorem
to ellipses, Blaschke products and numerical ranges, summarized, for example,
in the recent book [11]. We show how those results can be understood using
ideas from the theory of orthogonal polynomials on the unit circle (OPUC) and,
in turn, can provide new insights into the theory of OPUC.
Comment: 46 pages, 4 figures; minor revisions from v1; accepted for
publication in Adv. Mat
An investigation of data compression techniques for hyperspectral core imager data
We investigate algorithms for tractable analysis of real hyperspectral image data from core samples provided by AngloGold Ashanti. In particular, we examine feature extraction, non-linear dimension reduction using diffusion maps, and wavelet approximation methods on our data.
P?=NP as minimization of degree 4 polynomial, integration or Grassmann number problem, and new graph isomorphism problem approaches
While the P vs NP problem is mainly approached from the point of view of
discrete mathematics, this paper proposes reformulations into the fields of
abstract algebra, geometry, Fourier analysis and continuous global
optimization, whose advanced tools might bring new perspectives and approaches
to this question. The first one is the equivalence of satisfiability of a
3-SAT problem with the question of reaching zero of a nonnegative degree-4
multivariate polynomial (a sum of squares), which could be tested from the
perspective of algebra by using the discriminant. It could also be approached
as a continuous global optimization problem inside , for example in
physical realizations like adiabatic quantum computers. However, the number of
local minima usually grows exponentially. Reducing to a degree-2 polynomial
plus constraints of being in , we get geometric formulations such as the
question of whether a plane or sphere intersects with . Some
non-standard perspectives on the Subset-Sum problem will also be presented,
such as through convergence of a series, or zeroing of a Fourier-type integral
for some natural . The last discussed approach uses anti-commuting
Grassmann numbers , making nonzero only if has a Hamilton cycle.
Hence, the P ≠ NP assumption implies exponential growth of the matrix
representation of Grassmann numbers. A promising-looking algebraic/geometric
approach to the graph isomorphism problem will also be discussed -- tested to
successfully distinguish strongly regular graphs with up to 29 vertices.
Comment: 19 pages, 8 figures
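The core idea of the first reformulation can be illustrated with a naive
sum-of-squares encoding: satisfying 0/1 assignments of a CNF formula are
exactly the zeros of a nonnegative polynomial. Note the simple encoding below
has degree 6 (clause products squared), not the paper's degree-4 construction;
it is a standard illustration of the idea, not the paper's method:

```python
from itertools import product

def clause_poly(assignment, clause):
    # A clause like (x1 OR NOT x2 OR x3) is encoded as [1, -2, 3].
    # The product below is 1 exactly when every literal is false, 0 otherwise.
    val = 1.0
    for lit in clause:
        v = assignment[abs(lit) - 1]
        val *= (1 - v) if lit > 0 else v
    return val

def p(assignment, clauses):
    # Nonnegative polynomial: zero iff the 0/1 assignment satisfies all clauses.
    bool_penalty = sum((v * v - v) ** 2 for v in assignment)      # forces v in {0,1}
    clause_penalty = sum(clause_poly(assignment, c) ** 2 for c in clauses)
    return bool_penalty + clause_penalty

# (x1 OR x2) AND (NOT x1 OR x2) is satisfiable, e.g. by x2 = 1,
# so the global minimum of p over {0,1}^2 is zero.
clauses = [[1, 2], [-1, 2]]
minimum = min(p(a, clauses) for a in product([0, 1], repeat=2))
```

Deciding whether such a polynomial attains zero over the reals is then a
continuous global optimization problem, which is the bridge the abstract
describes.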
Optimal Sparsification for Some Binary CSPs Using Low-degree Polynomials
This paper analyzes to what extent it is possible to efficiently reduce the
number of clauses in NP-hard satisfiability problems without changing the
answer. Upper and lower bounds are established using the concept of
kernelization. Existing results show that if NP is not contained in coNP/poly,
no efficient preprocessing algorithm can reduce n-variable instances of
CNF-SAT with d literals per clause to equivalent instances with bits for
any ε > 0. For the Not-All-Equal SAT problem, a compression to size
exists. We put these results in a common framework by analyzing
the compressibility of binary CSPs. We characterize constraint types based on
the minimum degree of multivariate polynomials whose roots correspond to the
satisfying assignments, obtaining (nearly) matching upper and lower bounds in
several settings. Our lower bounds show that not just the number of
constraints, but also the encoding size of individual constraints, plays an
important role. For example, for Exact Satisfiability with unbounded clause
length it is possible to efficiently reduce the number of constraints to n+1,
yet no polynomial-time algorithm can reduce to an equivalent instance with
bits for any ε > 0, unless NP is a subset of coNP/poly.
Comment: Updated the cross-composition in Lemma 18 (minor update), since the
previous version did NOT satisfy requirement 4 of Lemma 18 (the proof of
Claim 20 was incorrect)
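The polynomial-based sparsification can be illustrated for Exact
Satisfiability: a constraint "exactly one of x_{i1}, ..., x_{ik} is true" is
the root set of the degree-1 polynomial x_{i1} + ... + x_{ik} - 1, so any set
of such constraints is implied by a linearly independent subset of at most n+1
of them, found by Gaussian elimination. A minimal sketch of that reduction
(an illustration of the degree-1 case, not the paper's general algorithm):

```python
from fractions import Fraction

def linear_basis(rows):
    # Gaussian elimination over Q: keep one row per pivot column, so at most
    # (number of columns) linearly independent constraints survive.
    pivots = {}  # pivot column -> row normalized to a leading 1
    for row in rows:
        row = [Fraction(v) for v in row]
        for col in sorted(pivots):
            if row[col] != 0:
                coef = row[col]
                row = [r - coef * b for r, b in zip(row, pivots[col])]
        lead = next((i for i, v in enumerate(row) if v != 0), None)
        if lead is not None:
            pivots[lead] = [v / row[lead] for v in row]
    return [pivots[c] for c in sorted(pivots)]

# Each row encodes "x_{i1} + ... + x_{ik} = 1" as coefficients plus the
# right-hand side in the last slot. Here x1+x2=1 and x2+x3=1 together imply
# x1+2*x2+x3=2, so the third constraint is redundant and gets eliminated.
rows = [[1, 1, 0, 1], [0, 1, 1, 1], [1, 2, 1, 2]]
basis = linear_basis(rows)
```

Constraints whose root sets require higher-degree polynomials admit no such
linear-algebraic compression, which is the source of the lower bounds in the
abstract.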
- …