Learning DNF Expressions from Fourier Spectrum
Since its introduction by Valiant in 1984, PAC learning of DNF expressions
remains one of the central problems in learning theory. We consider this
problem in the setting where the underlying distribution is uniform, or more
generally, a product distribution. Kalai, Samorodnitsky and Teng (2009) showed
that in this setting a DNF expression can be efficiently approximated from its
"heavy" low-degree Fourier coefficients alone. This is in contrast to previous
approaches where boosting was used and thus Fourier coefficients of the target
function modified by various distributions were needed. This property is
crucial for learning DNF expressions over smoothed product distributions, a
learning model introduced by Kalai et al. (2009) and inspired by the seminal
smoothed analysis model of Spielman and Teng (2001).
We introduce a new approach to learning (or approximating) a polynomial
threshold function, which is based on creating a function with range [-1,1]
that approximately agrees with the unknown function on low-degree Fourier
coefficients. We then describe conditions under which this is sufficient for
learning polynomial threshold functions. Our approach yields a new, simple
algorithm for approximating any polynomial-size DNF expression from its "heavy"
low-degree Fourier coefficients alone. Our algorithm greatly simplifies the
proof of learnability of DNF expressions over smoothed product distributions.
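The core idea described above -- estimate the heavy low-degree Fourier coefficients from uniform examples, clip the resulting low-degree polynomial into [-1,1], and predict with its sign -- can be sketched classically. The following is an illustrative toy, not the paper's algorithm; all function names and thresholds are our own hypothetical choices.

```python
# Toy sketch (not the paper's algorithm): estimate the "heavy" low-degree
# Fourier coefficients of f : {0,1}^n -> {-1,+1} from uniform samples, then
# predict with the sign of the clipped low-degree reconstruction.
import itertools
import random

def chi(S, x):
    # Fourier character chi_S(x) = (-1)^{sum_{i in S} x_i}
    return -1 if sum(x[i] for i in S) % 2 else 1

def estimate_heavy_coeffs(f, n, degree, theta, m, rng):
    # Empirical Fourier coefficients up to the given degree; keep only those
    # whose estimated magnitude is at least theta (the "heavy" coefficients).
    samples = [tuple(rng.randrange(2) for _ in range(n)) for _ in range(m)]
    vals = [f(x) for x in samples]
    coeffs = {}
    for d in range(degree + 1):
        for S in itertools.combinations(range(n), d):
            est = sum(v * chi(S, x) for v, x in zip(vals, samples)) / m
            if abs(est) >= theta:
                coeffs[S] = est
    return coeffs

def predict(coeffs, x):
    # Clip the low-degree polynomial into [-1, 1], then take its sign.
    p = sum(c * chi(S, x) for S, c in coeffs.items())
    p = max(-1.0, min(1.0, p))
    return 1 if p >= 0 else -1
```

For a small DNF such as x0 OR (x1 AND x2) (in the ±1 convention), the degree-2 truncation already drops only a magnitude-1/4 term, so the clipped sign recovers the function everywhere.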
We also describe an application of our algorithm to learning monotone DNF
expressions over product distributions. Building on the work of Servedio
(2001), we give an algorithm that runs in time \poly((s \cdot
\log{(s/\eps)})^{\log{(s/\eps)}}, n), where s is the size of the target DNF
expression and \eps is the accuracy. This improves on the \poly((s \cdot
\log{(ns/\eps)})^{\log{(s/\eps)} \cdot \log{(1/\eps)}}, n) bound of Servedio
(2001).
Comment: Appears in Conference on Learning Theory (COLT) 201
Exact Learning with Tunable Quantum Neural Networks and a Quantum Example Oracle
In this paper, we study the tunable quantum neural network architecture in
the quantum exact learning framework with access to a uniform quantum example
oracle. We present an approach that uses amplitude amplification to correctly
tune the network to the target concept. We apply our approach to the class of
positive k-juntas and obtain an upper bound on the number of quantum examples
that are sufficient, with experimental results suggesting that a tighter upper
bound is possible.
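Amplitude amplification, the workhorse of this tuning approach, can be illustrated with a small classical statevector simulation. This is a generic Grover-style iteration, not the paper's construction; all names are our own.

```python
# Toy statevector simulation of amplitude amplification (Grover-style):
# each iteration phase-flips the "good" (marked) amplitudes and then
# inverts all amplitudes about their mean, boosting the good states.
import math

def amplified_success_prob(n_items, marked, iterations):
    amp = [1.0 / math.sqrt(n_items)] * n_items   # uniform superposition
    for _ in range(iterations):
        for i in marked:
            amp[i] = -amp[i]                     # oracle: phase-flip good states
        mean = sum(amp) / n_items
        amp = [2 * mean - a for a in amp]        # diffusion: invert about mean
    return sum(amp[i] ** 2 for i in marked)      # probability of a good outcome
```

With one marked item among 8, the success probability rises from 1/8 to roughly sin^2((2k+1)θ) with sin θ = 1/√8, i.e. about 0.95 after two iterations.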
Distributional PAC-Learning from Nisan's Natural Proofs
Carmosino et al. (2016) demonstrated that natural proofs of circuit lower
bounds for a circuit class \Lambda imply efficient algorithms for learning
\Lambda-circuits, but only over \textit{the uniform distribution}, with
\textit{membership queries}, and provided \AC^0[p] \subseteq \Lambda. We
consider whether this implication can be generalized to \Lambda \not\supseteq
\AC^0[p], and to learning algorithms which use only random examples and learn
over arbitrary example distributions (Valiant's PAC-learning model).
We first observe that, if, for any circuit class \Lambda, there is an
implication from natural proofs for \Lambda to PAC-learning for \Lambda,
then standard assumptions from lattice-based cryptography do not hold. In
particular, we observe that depth-2 majority circuits are a (conditional)
counterexample to the implication, since Nisan (1993) gave a natural proof,
but Klivans and Sherstov (2009) showed hardness of PAC-learning under
lattice-based assumptions. We thus ask: what learning algorithms can we
reasonably expect to follow from Nisan's natural proofs?
Our main result is that all natural proofs arising from a type of
communication complexity argument, including Nisan's, imply PAC-learning
algorithms in a new \textit{distributional} variant (i.e., an ``average-case''
relaxation) of Valiant's PAC model. Our distributional PAC model is stronger
than the average-case prediction model of Blum et al. (1993) and the heuristic
PAC model of Nanashima (2021), and has several important properties which make
it of independent interest, such as being \textit{boosting-friendly}. The main
applications of our result are new distributional PAC-learning algorithms for
depth-2 majority circuits, polytopes and DNFs over natural target
distributions, as well as the nonexistence of encoded-input weak PRFs that can
be evaluated by depth-2 majority circuits.
Learning Boolean functions with multi-controlled X gates
As of late, the fields of quantum computing and machine learning have experienced rapid, simultaneous development. It is thus natural that the interplay between these two fields is being investigated, in the hope that each could benefit from the other. In this thesis, we explore one facet of this union, called quantum machine learning. More precisely, throughout this thesis, the aim is to learn Boolean functions using quantum circuits.
To do so, we first study a type of circuit, which we call a tunable quantum neural network, made exclusively of multi-controlled X gates, and we formally show that circuits of this type can express any Boolean function, provided they are tuned correctly. We then devise a learning algorithm that uses a specific quantum superposition to identify misclassified inputs. This algorithm aims to minimise the number of updates to the quantum circuit, as updating can be a costly operation. However, because of the large number of measurements required, it may not be practical.
To tackle this limitation, and to guide our design of a learning algorithm that is practical, we draw on the still-developing field of quantum learning theory and design two further learning algorithms, one for each framework considered. The first algorithm trains the network in the quantum probably approximately correct (QPAC) learning framework. By leveraging a quantum procedure called amplitude amplification, we show that this algorithm is efficient. The second algorithm also uses amplitude amplification, but this time to train the network in the quantum exact learning framework with access to a uniform quantum example oracle. In both frameworks, we show that, in some cases, our algorithms perform better than what can be found in the literature.
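On classical basis states, a circuit of multi-controlled X gates writing into a target qubit computes the XOR of the AND terms selected by the tuned controls, i.e. the algebraic normal form (Reed-Muller expansion), which exists for every Boolean function. A classical sketch of our own (hypothetical names) that recovers this expansion via the Möbius transform over GF(2):

```python
# Classical illustration: every Boolean function equals a XOR of AND terms
# (its algebraic normal form), mirroring a bank of multi-controlled X gates.
from itertools import product

def anf_coefficients(f, n):
    # Moebius transform over GF(2): the monomial prod_{i : S[i]=1} x_i is
    # present iff XOR-ing f over all inputs supported inside S gives 1.
    coeffs = []
    for S in product((0, 1), repeat=n):
        acc = 0
        for x in product((0, 1), repeat=n):
            if all(x[i] <= S[i] for i in range(n)):
                acc ^= f(x)
        if acc:
            coeffs.append(S)   # this multi-controlled X gate is switched on
    return coeffs

def eval_anf(coeffs, x):
    # XOR the activated AND terms, mimicking sequential multi-controlled X
    # gates flipping a target bit initialised to 0.
    out = 0
    for S in coeffs:
        if all(xi == 1 for xi, si in zip(x, S) if si == 1):
            out ^= 1
    return out
```

For example, x0 OR (x1 AND x2) has the expansion x0 XOR x1x2 XOR x0x1x2, i.e. three activated gates.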
On the Learnability of Monotone Functions
A longstanding lacuna in the field of computational learning theory is the learnability of succinctly representable monotone Boolean functions, i.e., functions whose output can only increase when input bits flip from 0 to 1. This thesis makes significant progress towards understanding both the possibilities and the limitations of learning various classes of monotone functions by carefully considering the complexity measures used to evaluate them. We show that Boolean functions computed by polynomial-size monotone circuits are hard to learn assuming the existence of one-way functions. Having shown the hardness of learning general polynomial-size monotone circuits, we show that the class of Boolean functions computed by polynomial-size depth-3 monotone circuits is hard to learn using statistical queries. As a counterpoint, we give a statistical query algorithm that learns random polynomial-size depth-2 monotone circuits (i.e., monotone DNF formulas). As a preliminary step towards a fully polynomial-time, proper learning algorithm for polynomial-size monotone decision trees, we also show the relationship between the average depth of a monotone decision tree, its average sensitivity, and its variance. Finally, we return to monotone DNF formulas and show that they are teachable (a different model of learning) in the average case. We also show that non-monotone DNF formulas, juntas, and sparse GF(2) formulas are teachable in the average case.
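The monotonicity property in question -- flipping an input bit from 0 to 1 can never flip the output from 1 to 0 -- can be checked exhaustively for small n. The helper below is our own illustration, not from the thesis.

```python
# Brute-force monotonicity check for a Boolean function f : {0,1}^n -> {0,1}.
from itertools import product

def is_monotone(f, n):
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]   # flip bit i from 0 to 1
                if f(x) > f(y):                # output dropped: not monotone
                    return False
    return True
```

AND and OR are monotone under this test, while negation is not.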
LIPIcs, Volume 251, ITCS 2023, Complete Volume