Lossy Kernelization
In this paper we propose a new framework for analyzing the performance of
preprocessing algorithms. Our framework builds on the notion of kernelization
from parameterized complexity. However, as opposed to the original notion of
kernelization, our definitions combine well with approximation algorithms and
heuristics. The key new definition is that of a polynomial size
α-approximate kernel. Loosely speaking, a polynomial size
α-approximate kernel is a polynomial time pre-processing algorithm that
takes as input an instance (I, k) to a parameterized problem, and outputs
another instance (I', k') to the same problem, such that |I'| + k' ≤ k^{O(1)}.
Additionally, for every c ≥ 1, a c-approximate solution s'
to the pre-processed instance (I', k') can be turned in polynomial time into a
(c · α)-approximate solution s to the original instance (I, k).
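The definition just sketched can be condensed into two conditions, using the standard notation (I, k) for an instance with parameter k (a paraphrase for a minimization problem; the formal definition treats maximization symmetrically):

```latex
% An alpha-approximate kernelization is a pair of polynomial time algorithms:
% a reduction and a solution-lifting procedure.
\text{Reduce:}\quad (I, k) \;\longmapsto\; (I', k')
  \quad\text{with}\quad |I'| + k' \le k^{O(1)},
\qquad
\text{Lift:}\quad \frac{\mathrm{ALG}(I', s')}{\mathrm{OPT}(I')} \le c
  \;\Longrightarrow\;
  \frac{\mathrm{ALG}(I, s)}{\mathrm{OPT}(I)} \le c \cdot \alpha .
```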
Our main technical contributions are α-approximate kernels of
polynomial size for three problems, namely Connected Vertex Cover, Disjoint
Cycle Packing and Disjoint Factors. These problems are known not to admit any
polynomial size kernels unless NP ⊆ coNP/poly. Our approximate
kernels simultaneously beat both the lower bounds on the (normal) kernel size,
and the hardness of approximation lower bounds for all three problems. On the
negative side we prove that Longest Path parameterized by the length of the
path and Set Cover parameterized by the universe size do not admit even an
α-approximate kernel of polynomial size, for any α ≥ 1, unless
NP ⊆ coNP/poly. In order to prove this lower bound we need to combine
in a non-trivial way the techniques used for showing kernelization lower bounds
with the methods for showing hardness of approximation.
Comment: 58 pages. Version 2 contains new results: PSAKS for Cycle Packing and
approximate kernel lower bounds for Set Cover and Hitting Set parameterized
by universe size
Randomness Quality of CI Chaotic Generators: Applications to Internet Security
Due to the rapid development of the Internet in recent years, the need to
find new tools to reinforce trust and security through the Internet has become
a major concern. The discovery of new pseudo-random number generators with a
strong level of security is thus becoming a hot topic, because numerous
cryptosystems and data hiding schemes are directly dependent on the quality of
these generators. At the conference Internet'09, we have described a generator
based on chaotic iterations, which behaves chaotically as defined by Devaney.
In this paper, the proposal is to improve the speed and the security of this
generator, to make its use more relevant in the Internet security context. To
do so, a comparative study between various generators is carried out and
statistical results are given. Finally, an application in the information
hiding framework is presented, to give an illustrative example of the use of
such a generator in the Internet security field.
Comment: 6 pages, 6 figures. In INTERNET'2010, The 2nd Int. Conf. on Evolving
Internet, Valencia, Spain, pages 125-130, September 2010. IEEE Computer
Society Press. Note: Best Paper award
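As a rough illustration of the kind of construction described — not the authors' actual generator: the inner generators, their parameters, and the update schedule below are all invented for the sketch — chaotic iterations can be modeled as flipping components of a Boolean state vector under an externally supplied strategy:

```python
# Minimal sketch of a PRNG built on "chaotic iterations": state components
# are negated according to a strategy sequence. The two inner generators are
# plain LCGs with hypothetical parameters; the paper's actual inner
# generators, parameters, and output schedule differ.

N = 8  # number of Boolean state components (illustrative choice)

def lcg(seed, a=1103515245, c=12345, m=2**31):
    """A simple linear congruential generator driving the iterations."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def ci_bits(seed1=42, seed2=7, n_outputs=4):
    """Return n_outputs N-bit integers produced by chaotic iterations:
    one inner generator chooses which component to flip, the other how
    many flips to perform between successive outputs."""
    strat = lcg(seed1)   # selects the component to negate
    count = lcg(seed2)   # selects the number of iterations per output
    state = [0] * N
    out = []
    for _ in range(n_outputs):
        for _ in range(1 + next(count) % N):  # a few chaotic iterations
            state[next(strat) % N] ^= 1       # negate one state component
        out.append(int("".join(map(str, state)), 2))
    return out
```

The output is fully determined by the two seeds, as for any PRNG; the chaotic behavior in Devaney's sense concerns the iteration of the flip map itself, not this toy driver.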
Content addressable memory project
A parameterized version of the tree processor was designed and tested (by simulation). The leaf processor design is 90 percent complete. We expect to complete and test a combination of tree and leaf cell designs in the next period. Work is proceeding on algorithms for the content addressable memory (CAM), and once the design is complete we will begin simulating algorithms for large problems. The following topics are covered: (1) the practical implementation of content addressable memory; (2) design of a LEAF cell for the Rutgers CAM architecture; (3) a circuit design tool user's manual; and (4) design and analysis of efficient hierarchical interconnection networks.
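For readers unfamiliar with the device, a content addressable memory inverts the usual lookup: a data word is presented and the memory returns every address holding a match. A toy software model (the class name and the ternary "don't care" mask are illustrative, not the Rutgers design):

```python
# Toy model of a content addressable memory: search by content, get
# back matching addresses. A mask bit set to 1 marks a "don't care"
# position, as in a ternary CAM.

class CAM:
    def __init__(self, size):
        self.words = [None] * size          # stored data words

    def write(self, addr, word):
        self.words[addr] = word

    def search(self, key, mask=0):
        """Return all addresses whose word matches key on every bit
        where mask is 0."""
        return [a for a, w in enumerate(self.words)
                if w is not None and (w ^ key) & ~mask == 0]

cam = CAM(4)
cam.write(0, 0b1010)
cam.write(2, 0b1011)
cam.search(0b1010)            # exact match -> [0]
cam.search(0b1010, mask=0b1)  # ignore lowest bit -> [0, 2]
```

In hardware the search is performed against all words in parallel in a single cycle, which is the point of the architecture; the loop above only models the result.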
Analysis of Carries in Signed Digit Expansions
The number of positive and negative carries in the addition of two
independent random signed digit expansions of given length is analyzed
asymptotically for the (q, d)-system and the symmetric signed digit
expansion. The results include expectation, variance, covariance between the
positive and negative carries and a central limit theorem.
Dependencies between the digits require determining suitable transition
probabilities to obtain equidistribution on all expansions of given length. A
general procedure is described to obtain such transition probabilities for
arbitrary regular languages.
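A concrete special case may help fix ideas: in balanced ternary (base 3 with digits −1, 0, 1 — a signed digit system of the kind studied, chosen here purely for illustration), digitwise addition produces carries that are themselves −1, 0, or 1, and the counts of positive and negative carries are exactly the statistics under analysis:

```python
# Illustration with balanced ternary (digits -1, 0, 1): adding two digit
# expansions position by position generates carries in {-1, 0, 1}; we
# tally the positive and negative carries separately.

def add_balanced_ternary(a, b):
    """Add two balanced-ternary digit lists (least significant first);
    return (sum_digits, positive_carries, negative_carries)."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    carry, pos, neg = 0, 0, 0
    out = []
    for i in range(n):
        s = a[i] + b[i] + carry
        d = ((s + 1) % 3) - 1          # digit forced into {-1, 0, 1}
        carry = (s - d) // 3           # carry lies in {-1, 0, 1}
        pos += carry > 0
        neg += carry < 0
        out.append(d)
    if carry:
        out.append(carry)
    return out, pos, neg
```

For example, adding 4 = (1, 1) and 1 = (1) yields 5 with two positive carries and no negative one.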
The number of iterations in von Neumann's parallel addition method for the
symmetric signed digit expansion is also analyzed, again including expectation,
variance and convergence to a double exponential limiting distribution. This
analysis is carried out in a general framework for sequences of generating
functions.
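The iteration count in question is easiest to see in the classic binary form of von Neumann's scheme (the paper analyzes the analogous procedure over the symmetric signed digit expansion; the binary version below is only the familiar special case): replace the pair of summands by the carry-free digitwise sum and the shifted carries, and repeat until no carries remain.

```python
# Von Neumann's parallel addition, binary special case: each round
# computes all digit sums and all carries simultaneously; the number of
# rounds until the carries die out is the analyzed quantity.

def von_neumann_add(x, y):
    """Return (x + y, number_of_rounds) using carry-save rounds."""
    rounds = 0
    while y:
        x, y = x ^ y, (x & y) << 1   # digitwise sum / carries shifted left
        rounds += 1
    return x, rounds
```

For example, `von_neumann_add(5, 3)` returns `(8, 4)`: the carry train needs four rounds to die out, and it is the distribution of this round count over random inputs that exhibits the double exponential limit.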