Average-Case Complexity
We survey the average-case complexity of problems in NP.
We discuss various notions of good-on-average algorithms, and present
completeness results due to Impagliazzo and Levin. Such completeness results
establish the fact that if a certain specific (but somewhat artificial) NP
problem is easy-on-average with respect to the uniform distribution, then all
problems in NP are easy-on-average with respect to all samplable distributions.
Applying the theory to natural distributional problems remains an outstanding
open question. We review some natural distributional problems whose
average-case complexity is of particular interest and that do not yet fit into
this theory.
A major open question is whether the existence of hard-on-average problems in NP
can be based on the P ≠ NP assumption or on related worst-case assumptions.
We review negative results showing that certain proof techniques cannot prove
such a result. While the relation between worst-case and average-case
complexity for general NP problems remains open, there has been progress in
understanding the relation between different ``degrees'' of average-case
complexity. We discuss some of these ``hardness amplification'' results.
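For concreteness, the notions this survey works with can be stated as follows; this is a standard formalization in our own notation, not text from the abstract, and the survey discusses several variants.

% Standard definitions (our notation); precise variants are in the survey itself.
A distributional problem is a pair $(L, \mathcal{D})$ where $L \in \mathrm{NP}$ and
$\mathcal{D} = \{D_n\}_{n \ge 1}$ is an ensemble of distributions, $D_n$ supported on
inputs of length $n$; $\mathcal{D}$ is \emph{samplable} if some polynomial-time algorithm
on input $1^n$ outputs a sample distributed according to $D_n$.
The problem $(L, \mathcal{D})$ is \emph{easy on average} (in the errorless sense) if there
is an algorithm $A(x, \delta)$ running in time $\mathrm{poly}(|x|, 1/\delta)$ with
$A(x, \delta) \in \{L(x), \bot\}$ for every input and, for every $n$ and every $\delta > 0$,
\[
  \Pr_{x \sim D_n}\bigl[A(x, \delta) = \bot\bigr] \le \delta .
\]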
Heuristic algorithms for the Longest Filled Common Subsequence Problem
At CPM 2017, Castelli et al. define and study a new variant of the Longest
Common Subsequence Problem, termed the Longest Filled Common Subsequence
Problem (LFCS). For the LFCS problem, the input consists of two strings and a
multiset of characters. The goal is to insert the characters from the multiset
into one of the strings, thus obtaining a new string such that the Longest
Common Subsequence (LCS) between the new string and the other input string is
maximized. Castelli et al. show that the problem is NP-hard and provide a
3/5-approximation algorithm for the problem.
In this paper we study the problem from an experimental point of view. We
introduce, implement and test new heuristic algorithms and compare them with
the approximation algorithm of Castelli et al. Moreover, we introduce an Integer
Linear Program (ILP) model for the problem and use the state-of-the-art ILP
solver Gurobi to obtain exact solutions for moderate-sized instances.
Comment: Accepted and presented as a proceedings paper at SYNASC 201
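The quantity every heuristic and the ILP are ultimately scored on is the LCS length between the filled string and the other input string. As a point of reference, a minimal dynamic-programming sketch of that subroutine (our own illustration in Python, not code from the paper) is:

def lcs_length(a: str, b: str) -> int:
    """Classic O(len(a) * len(b)) dynamic program for the length of the
    Longest Common Subsequence; this is the quantity the LFCS problem
    maximizes after the multiset characters have been inserted."""
    n, m = len(a), len(b)
    # dp[i][j] = LCS length of the prefixes a[:i] and b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

# Example: lcs_length("ACGT", "ACT") == 3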
The problem with the SURF scheme
There is a serious problem with one of the assumptions made in the security
proof of the SURF scheme: the assumption concerns a problem that turns out to
be easy in the regime of parameters needed for the SURF scheme to work.
We give the old version of the paper afterwards for the reader's convenience.
Comment: Warning: we found a serious problem in the security proof of the
SURF scheme. We explain this problem here and give the old version of the
paper afterwards.
Cryptography from tensor problems
We describe a new proposal for a trap-door one-way function. The new proposal belongs to the "multivariate quadratic" family, but the trap-door is different from those of existing methods, and is simpler.
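To give a feel for the "multivariate quadratic" family the proposal belongs to, the sketch below evaluates a random system of quadratic polynomials over GF(2). This is a generic illustration only: it embeds no trapdoor and is not the construction proposed in the paper.

import numpy as np

def random_mq_system(n_vars, n_polys, seed=None):
    """Generic multivariate quadratic (MQ) public map over GF(2): each of the
    n_polys polynomials is x^T Q x + L x + c with random Q, L, c.
    Illustrative only -- no trapdoor is embedded here."""
    rng = np.random.default_rng(seed)
    Q = rng.integers(0, 2, size=(n_polys, n_vars, n_vars))
    L = rng.integers(0, 2, size=(n_polys, n_vars))
    c = rng.integers(0, 2, size=n_polys)
    return Q, L, c

def evaluate_mq(system, x):
    """Evaluate the MQ map at a 0/1 vector x, with all arithmetic mod 2."""
    Q, L, c = system
    x = np.asarray(x)
    quad = np.einsum('i,kij,j->k', x, Q, x)   # x^T Q_k x for every polynomial k
    return (quad + L @ x + c) % 2

# Example: a public map from 8 input bits to 10 output bits.
pk = random_mq_system(n_vars=8, n_polys=10, seed=0)
y = evaluate_mq(pk, [1, 0, 1, 1, 0, 0, 1, 0])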
Partition strategies for incremental Mini-Bucket
Probabilistic graphical models such as Markov random fields and Bayesian networks
provide powerful frameworks for knowledge representation and reasoning
over models with large numbers of variables. Unfortunately, exact inference
problems on graphical models are generally NP-hard, which has led to
significant interest in approximate inference algorithms.
Incremental mini-bucket is a framework for approximate inference that bounds
the exact partition function from above and below: starting from a model with
completely relaxed constraints, i.e. with the smallest possible regions, it
incrementally adds larger regions to the approximation. Current approximate
inference algorithms provide tight upper bounds on the exact partition
function, but the lower bounds tend to be loose or even trivial.
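To make the framework concrete, below is a minimal, self-contained Python sketch of plain (non-incremental) mini-bucket elimination with an i-bound: each eliminated variable is summed out of one mini-bucket and maxed (or, for a naive lower bound, minimized) out of the others. All names and the greedy scope-based partitioning are our own illustration; this is not the incremental scheme or the region-selection strategy developed in this project.

import numpy as np

# Minimal sketch of mini-bucket elimination for bounding the partition
# function Z of a discrete graphical model with nonnegative factors.

class Factor:
    def __init__(self, variables, table):
        self.vars = tuple(variables)               # one variable per table axis
        self.table = np.asarray(table, dtype=float)

def multiply(f, g):
    """Pointwise product of two factors after aligning their variable axes."""
    all_vars = tuple(dict.fromkeys(f.vars + g.vars))
    def expand(h):
        shape = [h.table.shape[h.vars.index(v)] if v in h.vars else 1 for v in all_vars]
        perm = [h.vars.index(v) for v in all_vars if v in h.vars]
        return np.transpose(h.table, perm).reshape(shape)
    return Factor(all_vars, expand(f) * expand(g))

def eliminate(f, var, op):
    """Remove `var` from factor `f` with np.sum, np.max or np.min."""
    axis = f.vars.index(var)
    return Factor(f.vars[:axis] + f.vars[axis + 1:], op(f.table, axis=axis))

def mini_bucket_bound(factors, order, ibound, upper=True):
    """Bound on Z; `order` must contain every variable of the model."""
    pool = list(factors)
    for var in order:
        bucket = [f for f in pool if var in f.vars]
        pool = [f for f in pool if var not in f.vars]
        minibuckets = []                           # greedy split: joint scope <= ibound
        for f in bucket:
            for mb in minibuckets:
                if len(set(f.vars).union(*[g.vars for g in mb])) <= ibound:
                    mb.append(f)
                    break
            else:
                minibuckets.append([f])
        for i, mb in enumerate(minibuckets):
            prod = mb[0]
            for g in mb[1:]:
                prod = multiply(prod, g)
            # sum the variable out of one mini-bucket, max/min it out of the rest
            op = np.sum if i == 0 else (np.max if upper else np.min)
            pool.append(eliminate(prod, var, op))
    result = 1.0
    for f in pool:                                 # only constant (empty-scope) factors remain
        result *= float(f.table)
    return result

How each bucket is split into mini-buckets is exactly the kind of partitioning choice that determines how tight the resulting bounds are, which is what the strategies studied in this project target.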
This project focuses on researching partitioning strategies that improve the
lower bounds obtained with mini-bucket elimination, working within the framework
of incremental mini-bucket.
We start from the idea that variables that are highly correlated should be
reasoned about together, and we develop a strategy for region selection based
on that idea. We implement the strategy and explore ways to improve it, and
finally we measure the results obtained using the strategy and compare them to
several baselines.
We find that our strategy obtains tighter lower bounds than both of our
baselines. We also rule out several possible explanations for the improvement.
Statistical Zero Knowledge and quantum one-way functions
One-way functions are a very important notion in the field of classical
cryptography. Most examples of such functions, including factoring, discrete
log and the RSA function, can, however, be inverted with the help of a quantum
computer. In this paper, we study one-way functions that are hard to invert
even by a quantum adversary and describe a set of problems that are good
candidates for this. These problems include Graph Non-Isomorphism, approximate Closest
Lattice Vector and Group Non-Membership. More generally, we show that any hard
instance of Circuit Quantum Sampling gives rise to a quantum one-way function.
By the work of Aharonov and Ta-Shma, this implies that any language in
Statistical Zero Knowledge which is hard-on-average for quantum computers,
leads to a quantum one-way function. Moreover, extending the result of
Impagliazzo and Luby to the quantum setting, we prove that quantum
distributionally one-way functions are equivalent to quantum one-way functions.
Last, we explore the connections between quantum one-way functions and the
complexity class QMA and show that, similarly to the classical case, if any of
the above candidate problems is QMA-complete then the existence of quantum
one-way functions leads to the separation of QMA and AvgBQP.
Comment: 20 pages; Computational Complexity, Cryptography and Quantum Physics;
Published version, main results unchanged, presentation improved.
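For reference, the central notion in the abstract above can be stated as follows; this is the standard definition in our own phrasing, and the paper's formal quantum variant may differ in details.

% Standard definition (our phrasing); the paper studies the quantum-adversary variant.
A polynomial-time computable function $f \colon \{0,1\}^* \to \{0,1\}^*$ is \emph{one-way}
if for every probabilistic polynomial-time adversary $\mathcal{A}$ there is a negligible
function $\varepsilon$ such that for all $n$,
\[
  \Pr_{x \sim \{0,1\}^n}\bigl[\, f\bigl(\mathcal{A}(1^n, f(x))\bigr) = f(x) \,\bigr] \le \varepsilon(n).
\]
A \emph{quantum} one-way function is defined analogously with $\mathcal{A}$ replaced by a
polynomial-time quantum adversary; factoring, discrete log and RSA fail this stronger
requirement, since a quantum computer can invert them.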