Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials
This paper is our third step towards developing a theory of testing monomials
in multivariate polynomials and concentrates on two problems: (1) How to
compute the coefficients of multilinear monomials; and (2) how to find a
maximum multilinear monomial when the input is a polynomial. We
first prove that the first problem is #P-hard and then devise an
upper bound for this problem for any polynomial represented by an arithmetic
circuit of size . Later, this upper bound is improved to for
polynomials. We then design fully polynomial-time randomized
approximation schemes for this problem for polynomials. On the
negative side, we prove that, even for polynomials with terms of
degree , the first problem cannot be approximated at all for any
approximation factor , nor "weakly approximated" in a much relaxed
setting, unless P=NP. For the second problem, we first give a polynomial time
-approximation algorithm for polynomials with terms of
degrees no more than a constant . On the inapproximability side, we
give a lower bound, for any on the
approximation factor for polynomials. When terms in these
polynomials are constrained to degrees , we prove a lower
bound, assuming ; and a higher lower bound, assuming the
Unique Games Conjecture.
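As a toy illustration of problem (1) (a hedged sketch, not an algorithm from the paper): when a polynomial is small enough to expand explicitly, the coefficients of its multilinear monomials can be read off by brute force. Here a polynomial is represented as a dict from exponent tuples to coefficients.

```python
from collections import defaultdict

def poly_mul(p, q):
    """Multiply two polynomials given as {exponent-tuple: coefficient} dicts."""
    r = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            r[e] += c1 * c2
    return dict(r)

def multilinear_coeffs(p):
    """Keep only the multilinear monomials (every exponent 0 or 1)."""
    return {e: c for e, c in p.items() if all(x <= 1 for x in e)}

# (x1 + x2) * (x1 + x3) over variables (x1, x2, x3)
f = {(1, 0, 0): 1, (0, 1, 0): 1}
g = {(1, 0, 0): 1, (0, 0, 1): 1}
h = poly_mul(f, g)
print(multilinear_coeffs(h))  # x1x2, x1x3, x2x3 each appear with coefficient 1
```

This explicit expansion is exponential in the circuit size in general, which is consistent with the #P-hardness of the exact problem.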
Hilbert's Tenth Problem in Coq (Extended Version)
We formalise the undecidability of solvability of Diophantine equations, i.e.
polynomial equations over natural numbers, in Coq's constructive type theory.
To do so, we give the first full mechanisation of the
Davis-Putnam-Robinson-Matiyasevich theorem, stating that every recursively
enumerable problem -- in our case given by a Minsky machine -- is Diophantine. We
obtain an elegant and comprehensible proof by using a synthetic approach to
computability and by introducing Conway's FRACTRAN language as intermediate
layer. Additionally, we prove the reverse direction and show that every
Diophantine relation is recognisable by µ-recursive functions and give a
certified compiler from µ-recursive functions to Minsky machines. Comment: submitted to LMC
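Conway's FRACTRAN, used above as an intermediate layer, is simple enough to sketch. The following minimal interpreter (illustrative only, unrelated to the paper's Coq development) runs a program given as a list of fractions: at each step the state n is multiplied by the first fraction yielding an integer, and the machine halts when no fraction applies.

```python
from fractions import Fraction

def fractran(program, n, max_steps=10_000):
    """Run a FRACTRAN program (a list of Fractions) on initial state n."""
    for _ in range(max_steps):
        for f in program:
            m = f * n
            if m.denominator == 1:   # first applicable fraction
                n = int(m)
                break
        else:
            return n                 # no fraction applies: halt
    raise RuntimeError("step limit reached")

# Addition: starting from 2**a * 3**b, the one-fraction program [3/2]
# repeatedly trades a factor of 2 for a factor of 3 and halts at 3**(a+b).
print(fractran([Fraction(3, 2)], 2**3 * 3**5) == 3**8)  # True
```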
A discontinuity in pattern inference
This paper examines the learnability of a major subclass
of E-pattern languages – also known as erasing or extended pattern languages
– in Gold’s learning model: We show that the class of terminal-free
E-pattern languages is inferrable from positive data if the corresponding
terminal alphabet consists of three or more letters. Consequently, the
recently presented negative result for binary alphabets is unique.
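To make the notion concrete (a hedged sketch, not taken from the paper): a word belongs to a terminal-free E-pattern language if some substitution of the pattern's variables by possibly empty strings, consistent across repeated occurrences, maps the pattern onto the word. A brute-force membership check:

```python
def in_epattern(pattern, word):
    """Decide membership of `word` in the terminal-free E-pattern
    language of `pattern` (each character of `pattern` is a variable;
    substitutions may erase, i.e. map a variable to the empty string)."""
    def match(i, j, subst):
        if i == len(pattern):
            return j == len(word)
        v = pattern[i]
        if v in subst:                     # variable already bound
            s = subst[v]
            return word.startswith(s, j) and match(i + 1, j + len(s), subst)
        for k in range(j, len(word) + 1):  # try every (possibly empty) value
            subst[v] = word[j:k]
            if match(i + 1, k, subst):
                return True
            del subst[v]
        return False
    return match(0, 0, {})

print(in_epattern("xyx", "abcab"))  # True: x = "ab", y = "c"
print(in_epattern("xx", "aba"))     # False: "aba" is not a square
```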
Dynamic block encryption with self-authenticating key exchange
One of the greatest challenges facing cryptographers is the mechanism used
for key exchange. When secret data is transmitted, the chances are that there
may be an attacker who will try to intercept and decrypt the message. Having
done so, the attacker might simply exploit the information obtained, or
attempt to tamper with the message and thus misguide the recipient.
Both cases are equally serious and may cause great harm.
In cryptography, there are two commonly used methods of exchanging secret
keys between parties. In the first method, symmetric cryptography, the key is
sent in advance, over some secure channel, which only the intended recipient
can read. The second method of key sharing is by using a public key exchange
method, where each party has a private and public key, a public key is shared
and a private key is kept locally. In both cases, keys are exchanged between
two parties.
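The second, public-key method can be illustrated with a toy Diffie-Hellman exchange (not part of the thesis; the prime and generator below are for demonstration only, and real systems use standardised groups and authenticated protocols):

```python
import secrets

p = 2**127 - 1                      # a Mersenne prime; toy group modulus
g = 3                               # toy generator

a = secrets.randbelow(p - 3) + 2    # Alice's private key
b = secrets.randbelow(p - 3) + 2    # Bob's private key
A = pow(g, a, p)                    # Alice shares A publicly, keeps a local
B = pow(g, b, p)                    # Bob shares B publicly, keeps b local

# Each side combines its private key with the other's public key:
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
print(shared_alice == shared_bob)   # True: both derive g^(a*b) mod p
```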
In this thesis, we propose a method whereby the risk of exchanging keys
is minimised. The key is embedded in the encrypted text using a process
that we call `chirp coding', and recovered by the recipient using a process
that is based on correlation. The `chirp coding parameters' are exchanged
between users by employing a USB flash memory retained by each user. If the
keys are compromised they are still not usable because an attacker can only
have access to part of the key. Alternatively, the software can be configured
to operate in a one-time parameter mode, in which the parameters
are agreed upon in advance. There is no parameter exchange during file
transmission, except, of course, the key embedded in ciphertext.
The thesis also introduces a method of encryption which utilises dynamic blocks, where the block size is different for each block. Prime numbers are
used to drive two random number generators: a Linear Congruential Generator
(LCG), which takes in the seed and initialises the system, and a Blum-Blum-Shub
(BBS) generator, which is used to generate random streams to encrypt
messages, images or video clips for example. In each case, the key created is
text dependent and therefore will change as each message is sent.
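A rough sketch of the two generators (illustrative only; the primes, LCG constants, and seeding below are assumptions, not the thesis's implementation):

```python
def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator, used here only to derive a seed."""
    while True:
        seed = (a * seed + c) % m
        yield seed

def bbs_bits(p, q, seed, nbits):
    """Blum-Blum-Shub: iterate x -> x^2 mod n with n = p*q, where p and q
    are primes congruent to 3 mod 4; emit the low bit of each state."""
    assert p % 4 == 3 and q % 4 == 3
    n = p * q
    x = seed * seed % n          # square the seed to land on a residue
    bits = []
    for _ in range(nbits):
        x = x * x % n
        bits.append(x & 1)
    return bits

seed = next(lcg(42))             # LCG initialises the system
stream = bbs_bits(499, 547, seed, 16)  # toy primes, far too small for real use
print(stream)
```

In the scheme described above, the keystream would additionally depend on the message text, so each transmission uses a fresh key.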
The scheme presented in this research is composed of five basic modules. The
first module is the key generation module, where the key to be generated is
message dependent. The second module, encryption module, performs data
encryption. The third module, key exchange module, embeds the key into
the encrypted text. Once this is done, the message is transmitted and the
recipient uses the key extraction module to retrieve the key and finally the
decryption module is executed to decrypt the message and authenticate it.
In addition, the message may be compressed before encryption and decompressed
by the recipient after decryption using standard compression tools
Why Philosophers Should Care About Computational Complexity
One might think that, once we know something is computable, how efficiently
it can be computed is a practical question with little further philosophical
importance. In this essay, I offer a detailed case that one would be wrong. In
particular, I argue that computational complexity theory---the field that
studies the resources (such as time, space, and randomness) needed to solve
computational problems---leads to new perspectives on the nature of
mathematical knowledge, the strong AI debate, computationalism, the problem of
logical omniscience, Hume's problem of induction, Goodman's grue riddle, the
foundations of quantum mechanics, economic rationality, closed timelike curves,
and several other topics of philosophical interest. I end by discussing aspects
of complexity theory itself that could benefit from philosophical analysis. Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and
beyond," MIT Press, 2012. Some minor clarifications and corrections; new
references added
Automata theory and formal languages
These lecture notes present some basic notions and results on Automata Theory,
Formal Languages Theory, Computability Theory, and Parsing Theory. I prepared
these notes for a course on Automata, Languages, and Translators which I am
teaching at the University of Roma Tor Vergata. More material on these topics and
on parsing techniques for context-free languages can be found in standard textbooks
such as [1, 8, 9]. The reader is encouraged to look at those books.
A theorem denoted by the triple k.m.n is in Chapter k and Section m, and within
that section it is identified by the number n. An analogous numbering system is used
for algorithms, corollaries, definitions, examples, exercises, figures, and remarks. We
use ‘iff’ to mean ‘if and only if’.
Many thanks to my colleagues of the Department of Informatics, Systems, and
Production of the University of Roma Tor Vergata. I am also grateful to my stu-
dents and co-workers and, in particular, to Lorenzo Clemente, Corrado Di Pietro,
Fulvio Forni, Fabio Lecca, Maurizio Proietti, and Valerio Senni for their help and
encouragement.
Finally, I am grateful to Francesca Di Benedetto, Alessandro Colombo, Donato
Corvaglia, Gioacchino Onorati, and Leonardo Rinaldi of the Aracne Publishing Com-
pany for their kind cooperation
Inner Product Functional Commitments with Constant-Size Public Parameters and Openings
Functional commitments (Libert et al. [ICALP'16]) allow a party to commit to a vector of length and later open the commitment at functions of the committed vector succinctly, namely with communication logarithmic or constant in .
Existing constructions of functional commitments rely on trusted setups and have either openings and parameters, or they have short parameters generatable using public randomness but have -size openings.
In this work, we ask whether it is possible to construct functional commitments in which both parameters and openings can be of constant size.
Our main result is the construction of FC schemes matching this complexity. Our constructions support the evaluation of inner products over small integers; they are built using groups of unknown order and rely on succinct protocols over these groups that are secure in the generic group and random oracle model.
Classical-theoretical foundations of computing : a concise textbook
168 p. : ill. ; 29 cm. Includes bibliographical references (p. 167-168).