Pre-Reduction Graph Products: Hardnesses of Properly Learning DFAs and Approximating EDP on DAGs
The study of graph products is a major research topic and typically concerns
the term $f(G * H)$, e.g., to show that $f(G * H) = f(G) f(H)$. In this paper, we
study graph products in a non-standard form $f(R[G * H])$ where $R$ is a
"reduction", a transformation of any graph into an instance of an intended
optimization problem. We resolve some open problems as applications.
(1) A tight $n^{1-\epsilon}$-approximation hardness for the minimum
consistent deterministic finite automaton (DFA) problem, where $n$ is the
sample size. By a result of Board and Pitt [Theoretical Computer Science 1992], this
implies the hardness of properly learning DFAs assuming $NP \neq RP$ (the
weakest possible assumption).
(2) A tight $n^{1/2-\epsilon}$ hardness for the edge-disjoint paths (EDP)
problem on directed acyclic graphs (DAGs), where $n$ denotes the number of
vertices.
(3) A tight hardness of packing vertex-disjoint $k$-cycles for large $k$.
(4) An alternative (and perhaps simpler) proof for the hardness of properly
learning DNF, CNF and intersection of halfspaces [Alekhnovich et al., FOCS 2004
and J. Comput. Syst. Sci. 2008].
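To make the object in (1) concrete, here is a minimal Python sketch, with our own encoding and names rather than the paper's, of what it means for a DFA to be consistent with a labeled sample; the minimum consistent DFA problem asks for the smallest such automaton.

```python
def accepts(dfa, word):
    """Run the DFA on `word`. A DFA is encoded here as a triple
    (delta, start, accepting) with a total transition function `delta`."""
    delta, state, accepting = dfa
    for ch in word:
        state = delta[(state, ch)]
    return state in accepting

def consistent(dfa, positive, negative):
    """A DFA is consistent with a sample iff it accepts every positive
    example and rejects every negative one."""
    return (all(accepts(dfa, w) for w in positive)
            and not any(accepts(dfa, w) for w in negative))

# A 2-state DFA over {a, b} accepting exactly the words that end in 'a'.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}
dfa = (delta, 0, {1})
print(consistent(dfa, positive=['a', 'ba', 'aa'], negative=['', 'b', 'ab']))  # True
```

The hardness result says that, given such a sample, even approximating the number of states of the smallest consistent DFA within $n^{1-\epsilon}$ is intractable.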
Regular Languages meet Prefix Sorting
Indexing strings via prefix (or suffix) sorting is, arguably, one of the most
successful algorithmic techniques developed in recent decades. Can indexing
be extended to languages? The main contribution of this paper is to initiate
the study of the sub-class of regular languages accepted by an automaton whose
states can be prefix-sorted. Starting from the recent notion of Wheeler graph
[Gagie et al., TCS 2017], which naturally extends the concept of prefix sorting
to labeled graphs, we investigate the properties of Wheeler languages, that is,
regular languages admitting an accepting Wheeler finite automaton.
Interestingly, we characterize this family as the natural extension of regular
languages endowed with the co-lexicographic ordering: when sorted, the strings
belonging to a Wheeler language are partitioned into a finite number of
co-lexicographic intervals, each formed by elements from a single Myhill-Nerode
equivalence class. Moreover: (i) We show that every Wheeler NFA (WNFA) with $n$
states admits an equivalent Wheeler DFA (WDFA) with at most $2n - 1 - |\Sigma|$
states that can be computed in $O(n)$ time. This is in sharp contrast with
general NFAs. (ii) We describe a quadratic algorithm to prefix-sort a proper
superset of the WDFAs, an $O(n \log n)$-time online algorithm to sort acyclic
WDFAs, and an optimal linear-time offline algorithm to sort general WDFAs. By
contribution (i), our algorithms can also be used to index any WNFA at the
moderate price of doubling the automaton's size. (iii) We provide a
minimization theorem that characterizes the smallest WDFA recognizing the same
language as any input WDFA. The corresponding constructive algorithm runs in
optimal linear time in the acyclic case, and in $O(n \log n)$ time in the
general case. (iv) We show how to compute the smallest WDFA equivalent to any
acyclic DFA in nearly-optimal time.
Comment: added minimization theorems; uploaded submitted version; new version
with new results (W-MH theorem, linear determinization); added author:
Giovanna D'Agostino
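The co-lexicographic ordering at the heart of Wheeler languages is easy to illustrate. In the following sketch (our own toy example, not from the paper), sorting strings by their reverses makes the words of a simple regular language occupy one contiguous interval:

```python
def colex_key(s):
    # Co-lexicographic order compares strings from right to left,
    # i.e., it is the lexicographic order of the reversed strings.
    return s[::-1]

# All words of length 1 or 2 over {a, b}, co-lexicographically sorted.
words = sorted(['a', 'b', 'aa', 'ab', 'ba', 'bb'], key=colex_key)
print(words)  # ['a', 'aa', 'ba', 'b', 'ab', 'bb']

# The regular language "words ending in 'a'" forms a single contiguous
# co-lexicographic interval: here, a prefix of the sorted list.
ends_in_a = [w for w in words if w.endswith('a')]
print(words[:len(ends_in_a)] == ends_in_a)  # True
```

For Wheeler languages in general, the sorted strings split into finitely many such intervals, each contained in one Myhill-Nerode class.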
Minimizing nfa's and regular expressions
We show inapproximability results concerning minimization of nondeterministic finite automata (nfa's) as well as of regular expressions relative to given nfa's, regular expressions, or deterministic finite automata (dfa's). We show that it is impossible to efficiently minimize a given nfa or regular expression with $n$ states, transitions, or symbols, respectively, within the factor $o(n)$, unless $P = PSPACE$. For the unary case, we show that for any $\delta > 0$ it is impossible to efficiently construct an approximately minimal nfa or regular expression within the factor $n^{1-\delta}$, unless $P = NP$. Our inapproximability results for a given dfa with $n$ states are based on cryptographic assumptions, and we show that any efficient algorithm will have an approximation factor of at least $n/\mathrm{poly}(\log n)$. Our setup also allows us to analyze the minimum consistent dfa problem.
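One root cause of the gap between nfa's and dfa's is nondeterministic succinctness. The classic example, sketched below in Python as our own illustration (not from the paper), is the language $L_k$ of words whose $k$-th symbol from the end is 'a': an nfa needs only $k+1$ states, while determinization, and by Myhill-Nerode any dfa at all, needs $2^k$.

```python
def subset_construction_size(k):
    """Count the reachable subsets when determinizing the (k+1)-state nfa
    for L_k = { w over {a,b} : the k-th symbol from the end of w is 'a' }.
    The nfa waits in state 0, guesses the distinguished 'a' (0 -> 1 on 'a'),
    then counts k-1 further symbols; state k is accepting."""
    def step(S, ch):
        T = set()
        for q in S:
            if q == 0:
                T.add(0)          # keep waiting for the guessed 'a'
                if ch == 'a':
                    T.add(1)      # guess: this 'a' is k-th from the end
            elif q < k:
                T.add(q + 1)      # count symbols after the guess
        return frozenset(T)

    start = frozenset({0})
    seen, frontier = {start}, [start]
    while frontier:               # explore all reachable subsets
        S = frontier.pop()
        for ch in 'ab':
            T = step(S, ch)
            if T not in seen:
                seen.add(T)
                frontier.append(T)
    return len(seen)

print([subset_construction_size(k) for k in (1, 2, 3, 4)])  # [2, 4, 8, 16]
```

Each reachable subset records which of the last $k$ symbols were 'a', so exactly $2^k$ subsets occur; this exponential succinctness is what makes nfa minimization so much harder to approximate than dfa minimization.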
An in-principle super-polynomial quantum advantage for approximating combinatorial optimization problems
Combinatorial optimization - a field of research addressing problems that
feature strongly in a wealth of scientific and industrial contexts - has been
identified as one of the core potential fields of applicability of quantum
computers. It is still unclear, however, to what extent quantum algorithms can
actually outperform classical algorithms for this type of problem. In this
work, by resorting to computational learning theory and cryptographic notions,
we prove that quantum computers feature an in-principle super-polynomial
advantage over classical computers in approximating solutions to combinatorial
optimization problems. Specifically, building on seminal work by Kearns and
Valiant and introducing a new reduction, we identify special types of problems
that are hard for classical computers to approximate up to polynomial factors.
At the same time, we give a quantum algorithm that can efficiently approximate
the optimal solution within a polynomial factor. The core of the quantum
advantage discovered in this work is ultimately borrowed from Shor's quantum
algorithm for factoring. Concretely, we prove a super-polynomial advantage for
approximating special instances of the so-called integer programming problem.
In doing so, we provide an explicit end-to-end construction for advantage
bearing instances. This result shows that quantum devices have, in principle,
the power to approximate combinatorial optimization solutions beyond the reach
of classical efficient algorithms. Our results also give clear guidance on how
to construct such advantage-bearing problem instances.
Comment: 5+13 pages, 5 figures, presentation improved
Resting state MEG oscillations show long-range temporal correlations of phase synchrony that break down during finger movement
The capacity of the human brain to interpret and respond to multiple temporal scales in its surroundings suggests that its internal interactions must also be able to operate over a broad temporal range. In this paper, we utilize a recently introduced method for characterizing the rate of change of the phase difference between MEG signals and use it to study the temporal structure of the phase interactions between MEG recordings from the left and right motor cortices during rest and during a finger-tapping task. We use the Hilbert transform to estimate moment-to-moment fluctuations of the phase difference between signals. After confirming the presence of scale-invariance, we estimate the Hurst exponent using detrended fluctuation analysis (DFA). An exponent greater than 0.5 is indicative of long-range temporal correlations (LRTCs) in the signal. We find that LRTCs are present in the α/μ and β frequency bands of resting-state MEG data. We demonstrate that finger movement disrupts LRTCs, producing a phase relationship with a structure similar to that of Gaussian white noise. The results are validated by applying the same analysis to data with a Gaussian white-noise phase difference, recordings from an empty scanner, and phase-shuffled time series. We interpret the findings by comparing the results with those we obtained in an earlier study in which we adopted this method to characterize phase relationships within a Kuramoto model of oscillators in its sub-critical, critical, and super-critical synchronization states. We find that the resting-state MEG from the left and right motor cortices shows moment-to-moment fluctuations of phase difference with a temporal structure similar to that of a system of Kuramoto oscillators just prior to its critical level of coupling, and that finger tapping moves the system away from this pre-critical state toward a more random state.
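Detrended fluctuation analysis itself is a short algorithm. The sketch below is a simplified version with a plain linear detrend and our own parameter choices, not the authors' pipeline: integrate the signal, detrend it window by window at several scales, and fit the log-log slope of fluctuation versus scale. A slope above 0.5 indicates LRTCs; uncorrelated noise gives roughly 0.5.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Estimate the DFA scaling (Hurst) exponent of signal x.
    For each scale s: split the cumulative profile into windows of
    length s, subtract a linear fit per window, and record the RMS
    residual; the exponent is the log-log slope over the scales."""
    profile = np.cumsum(x - np.mean(x))
    flucts = []
    for s in scales:
        n_windows = len(profile) // s
        sq_resid = []
        t = np.arange(s)
        for i in range(n_windows):
            seg = profile[i * s:(i + 1) * s]
            coef = np.polyfit(t, seg, 1)          # linear detrend
            sq_resid.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(sq_resid)))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# Sanity check on white noise, where theory predicts an exponent near 0.5.
rng = np.random.default_rng(0)
white = rng.standard_normal(10_000)
h = dfa_exponent(white, [16, 32, 64, 128, 256])
print(round(h, 2))
```

In the paper's setting, x would be the moment-to-moment phase-difference increments between the two MEG channels rather than raw noise.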
Approximate Learning of Limit-Average Automata
Limit-average automata are weighted automata on infinite words that use the average to aggregate the weights seen along infinite runs. We study approximate learning problems for limit-average automata in two settings: passive and active. In the passive case, we show that limit-average automata are not PAC-learnable, as samples must be of exponential size to provide (with good probability) enough detail to learn an automaton. We also show that the problem of finding an automaton that fits a given sample is NP-complete. In the active case, we show that limit-average automata can be learned almost-exactly, i.e., we can learn in polynomial time an automaton that is consistent with the target automaton on almost all words. On the other hand, we show that the problem of learning an automaton that approximates the target automaton (with perhaps fewer states) is NP-complete. The above results are shown for the uniform distribution on words; we briefly discuss learning over other distributions.
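The aggregate these automata use is easy to state concretely: for an eventually periodic run, the limit-average of the weights equals the mean weight of the repeated cycle, since any finite prefix washes out of the long-run average. A small Python sketch of our own (not from the paper):

```python
from itertools import chain, cycle, islice

def running_average(prefix, cyc, n):
    """Average weight of the first n steps of the infinite run that
    follows `prefix` once and then repeats `cyc` forever."""
    run = chain(prefix, cycle(cyc))
    weights = list(islice(run, n))
    return sum(weights) / n

# A costly prefix followed by the cycle [1, 3] forever: the limit-average
# is the cycle mean, 2, regardless of the prefix.
print(round(running_average([9, 9, 9], [1, 3], 10_000), 2))  # 2.0
```

Learning such an automaton from finite samples is hard precisely because any finite word reveals only prefix behavior, while the value of a run is determined by its tail.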