867,266 research outputs found
Learning Moore Machines from Input-Output Traces
The problem of learning automata from example traces (but no equivalence or
membership queries) is fundamental in automata learning theory and practice. In
this paper we study this problem for finite state machines with inputs and
outputs, and in particular for Moore machines. We develop three algorithms for
solving this problem: (1) the PTAP algorithm, which transforms a set of
input-output traces into an incomplete Moore machine and then completes the
machine with self-loops; (2) the PRPNI algorithm, which uses the well-known
RPNI algorithm for automata learning to learn a product of automata encoding a
Moore machine; and (3) the MooreMI algorithm, which directly learns a Moore
machine using PTAP extended with state merging. We prove that MooreMI has the
fundamental identification in the limit property. We also compare the
algorithms experimentally in terms of the size of the learned machine and
several notions of accuracy, introduced in this paper. Finally, we compare with
OSTIA, an algorithm that learns a more general class of transducers, and find
that OSTIA generally does not learn a Moore machine, even when fed with a
characteristic sample.
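To make the first step concrete, here is a minimal Python sketch of a PTAP-style construction: build a prefix-tree Moore machine from input-output traces, then complete it with self-loops. The class and method names are illustrative assumptions, not the paper's code, and the trace convention (one output per visited state, including the initial one) is one common choice.

```python
# Minimal sketch of a PTAP-style construction (hypothetical names, not the
# paper's code): build a prefix-tree Moore machine from input-output traces,
# then complete it with self-loops.

class MooreMachine:
    def __init__(self):
        self.transitions = {}  # (state, input) -> state
        self.outputs = {}      # state -> output
        self.alphabet = set()
        self.n_states = 1      # state 0 is the initial state

    def add_trace(self, inputs, outputs):
        # Convention assumed here: outputs[0] labels the initial state and
        # outputs[i+1] labels the state reached after consuming inputs[i].
        assert len(outputs) == len(inputs) + 1
        state = 0
        self._label(state, outputs[0])
        for a, o in zip(inputs, outputs[1:]):
            self.alphabet.add(a)
            if (state, a) not in self.transitions:
                self.transitions[(state, a)] = self.n_states
                self.n_states += 1
            state = self.transitions[(state, a)]
            self._label(state, o)

    def _label(self, state, o):
        # Consistent traces never relabel a state with a different output.
        assert self.outputs.setdefault(state, o) == o, "inconsistent traces"

    def complete_with_self_loops(self):
        # Totalize the machine: every missing transition becomes a self-loop.
        for s in range(self.n_states):
            for a in self.alphabet:
                self.transitions.setdefault((s, a), s)

m = MooreMachine()
m.add_trace("ab", ["0", "0", "1"])   # input word "ab", one output per state
m.complete_with_self_loops()
```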
A Learning Algorithm based on High School Teaching Wisdom
A learning algorithm based on primary school teaching and learning is
presented. The methodology is to continuously evaluate a student and to give
them extra training on the examples they repeatedly fail, until they can
correctly answer all types of questions. This incremental learning procedure
produces better learning curves by requiring the student to dedicate their
learning time optimally to the failed examples. When used in machine
learning, the algorithm is found to train a machine on the data with maximum
variance in the feature space, so that the generalization ability of the
network improves. The algorithm has interesting applications in data mining,
model evaluation and rare object discovery.
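As a rough illustration of the loop described above (an assumed sketch using a scikit-learn-style classifier, not the authors' implementation): evaluate the "student", collect the failures, and retrain with those examples emphasized until everything is answered correctly.

```python
# Toy sketch of the failure-driven loop (assumed scikit-learn-style
# interface; not the authors' code): evaluate the "student", then retrain
# with the failed examples emphasized, until every question is answered.
import numpy as np
from sklearn.linear_model import Perceptron

def teach_until_correct(model, X, y, max_rounds=100):
    model.fit(X, y)                    # initial pass over all examples
    for _ in range(max_rounds):
        wrong = model.predict(X) != y  # continuous evaluation
        if not wrong.any():
            break                      # all questions answered correctly
        # duplicate the failed examples so training time is concentrated
        # on them, per the abstract's teaching analogy
        X_aug = np.vstack([X, X[wrong]])
        y_aug = np.concatenate([y, y[wrong]])
        model.fit(X_aug, y_aug)
    return model

# e.g. teach_until_correct(Perceptron(), X_train, y_train)
```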
A survey of induction algorithms for machine learning
Central to all systems for machine learning from examples is an induction algorithm. The purpose of the algorithm is to generalize, from a finite set of training examples, a description consistent with the examples seen and, hopefully, with the potentially infinite set of examples not seen. This paper surveys four machine learning induction algorithms. The knowledge representation schemes and a PDL description of algorithm control are emphasized. System characteristics that are peculiar to a domain of application are de-emphasized. Finally, a comparative summary of the learning algorithms is presented.
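The abstract does not name the four surveyed algorithms, so the following is only a generic stand-in for "generalizing a description from a finite set of examples": a Find-S-style specific-to-general learner over attribute vectors, standard textbook induction rather than necessarily one of the surveyed systems.

```python
# Generic illustration of induction from examples: a Find-S-style learner
# that generalizes a conjunctive description from positive examples
# ('?' matches any attribute value). Not necessarily one of the four
# algorithms covered by the survey.

def find_s(examples):
    # examples: list of (attribute_tuple, label) pairs
    hypothesis = None
    for attrs, label in examples:
        if not label:
            continue                    # Find-S ignores negative examples
        if hypothesis is None:
            hypothesis = list(attrs)    # most specific: the first positive
        else:
            hypothesis = [h if h == a else '?'
                          for h, a in zip(hypothesis, attrs)]
    return hypothesis

# find_s([(("sunny", "warm"), True), (("sunny", "cold"), True)])
# -> ['sunny', '?']
```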
Strategy for quantum algorithm design assisted by machine learning
We propose a method for quantum algorithm design assisted by machine
learning. The method uses a quantum-classical hybrid simulator, where a
"quantum student" is taught by a "classical teacher." In other words, in our
method the learning system is supposed to evolve into a quantum algorithm
for a given problem, assisted by a classical feedback system. Our method is
applicable to designing quantum oracle-based algorithms. As a case study, we
chose an oracle decision problem, the Deutsch-Jozsa problem. We showed using
Monte-Carlo simulations that our simulator can faithfully learn a quantum
algorithm that solves the problem for a given oracle. Remarkably, the
learning time is proportional to the square root of the total number of
parameters, instead of the exponential dependence found in classical machine
learning based methods.
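For reference, the target behaviour the "quantum student" is supposed to discover in this case study is the standard Deutsch-Jozsa algorithm, which decides with a single oracle query whether a promised function is constant or balanced. Below is a minimal NumPy simulation of that textbook algorithm; it is an illustration of the case-study problem, not the paper's hybrid learning simulator.

```python
# Minimal NumPy simulation of the textbook Deutsch-Jozsa algorithm, the
# target of the paper's case study (an illustration, not the paper's
# simulator). One phase-oracle query decides constant vs. balanced.
import numpy as np
from functools import reduce

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def deutsch_jozsa(f, n):
    # f maps {0, ..., 2^n - 1} to {0, 1}, promised constant or balanced
    N = 2 ** n
    Hn = reduce(np.kron, [H] * n)      # Hadamard on each of the n qubits
    state = Hn @ np.eye(N)[0]          # uniform superposition from |0...0>
    state = np.array([(-1) ** f(x) for x in range(N)]) * state  # oracle
    state = Hn @ state
    # amplitude of |0...0> is +-1 for constant f and 0 for balanced f
    return "constant" if abs(state[0]) > 0.5 else "balanced"

assert deutsch_jozsa(lambda x: 0, 3) == "constant"
assert deutsch_jozsa(lambda x: x & 1, 3) == "balanced"
```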
Supervised Quantum Learning without Measurements
We propose a quantum machine learning algorithm for efficiently solving a
class of problems encoded in quantum controlled unitary operations. The central
physical mechanism of the protocol is the iteration of a quantum time-delayed
equation that introduces feedback in the dynamics and eliminates the necessity
of intermediate measurements. The performance of the quantum algorithm is
analyzed by comparing the results obtained in numerical simulations with the
outcome of classical machine learning methods for the same problem. The use of
time-delayed equations enhances the toolbox of the field of quantum machine
learning, which may enable unprecedented applications in quantum
technologies.
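The protocol's operator-valued delay equation is not given in the abstract; as a generic, purely classical toy of the mechanism it names (feedback introduced by a time delay, with no intermediate readout), one can forward-Euler-integrate a scalar delay equation. Names and constants below are illustrative assumptions.

```python
# Generic classical toy of delay-induced feedback (the actual protocol's
# equation is operator-valued and given in the paper, not here).
# Forward-Euler integration of  dx/dt = -i*h*x(t) + k*x(t - tau).
import numpy as np

def delayed_evolution(h=1.0, k=0.3, tau=1.0, dt=0.01, T=10.0):
    steps, delay = int(T / dt), int(tau / dt)
    x = np.zeros(steps, dtype=complex)
    x[:delay] = 1.0                     # constant history on [0, tau)
    for t in range(delay, steps - 1):
        # the x(t - tau) term feeds earlier state back into the dynamics,
        # playing the role of feedback without intermediate measurement
        x[t + 1] = x[t] + dt * (-1j * h * x[t] + k * x[t - delay])
    return x
```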
Quantum adiabatic machine learning by zooming into a region of the energy surface
Recent work has shown that quantum annealing for machine learning, referred to as QAML, can perform comparably to state-of-the-art machine learning methods with a specific application to Higgs boson classification. We propose QAML-Z, an algorithm that iteratively zooms in on a region of the energy surface by mapping the problem to a continuous space and sequentially applying quantum annealing to an augmented set of weak classifiers. Results on a programmable quantum annealer show that QAML-Z matches classical deep neural network performance at small training set sizes and reduces the performance margin between QAML and classical deep neural networks by almost 50% at large training set sizes, as measured by area under the receiver operating characteristic curve. The significant improvement of quantum annealing algorithms for machine learning, together with the use of a discrete quantum algorithm on a continuous optimization problem, both broadens the class of problems that can be solved by quantum annealers and suggests that the performance of near-term quantum machine learning is approaching classical benchmarks.
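A rough classical sketch of the zooming loop described above (assumed names; a greedy coordinate search stands in for the quantum annealer, so this is an illustration of the idea, not the QAML-Z implementation): continuous weights over weak classifiers are shifted by ±b^t at zoom step t, and the shrinking step size is what implements the "zoom".

```python
# Rough classical sketch of the zooming loop (assumed names; a greedy
# coordinate search replaces the quantum annealer, so this illustrates
# the idea rather than reproducing QAML-Z).
import numpy as np

def qaml_z_sketch(C, y, n_zooms=8, b=0.5):
    # C: (n_samples, n_weak) weak-classifier outputs in {-1, +1}
    # y: (n_samples,) true labels in {-1, +1}
    n_weak = C.shape[1]
    mu = np.zeros(n_weak)               # continuous weights being refined

    def error(w):
        return np.mean(np.sign(C @ w + 1e-12) != y)

    for t in range(n_zooms):
        step = b ** t                   # the shrinking step is the "zoom"
        for i in range(n_weak):
            # annealer stand-in: pick the spin s in {-1, +1} that, applied
            # as mu_i += s * step, most reduces the training error
            e = np.eye(n_weak)[i]
            s = min((-1.0, 1.0), key=lambda s: error(mu + s * step * e))
            mu += s * step * e
    return mu                           # strong classifier: sign(C @ mu)
```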
