The Consistency dimension and distribution-dependent learning from queries
We prove a new combinatorial characterization of polynomial
learnability from equivalence queries, and state some of its
consequences relating the learnability of a class with the
learnability via equivalence and membership queries of its
subclasses obtained by restricting the instance space.
Then we propose and study two models of query learning in which there
is a probability distribution on the instance space, both as an
application of the tools developed from the combinatorial
characterization and as models of independent interest.
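As a concrete illustration of the equivalence-query model (not taken from the paper — the concept class, oracle, and target below are the textbook example of learning monotone conjunctions), a minimal sketch:

```python
from itertools import product

def learn_monotone_conjunction(n, equivalence_query):
    """Exactly learn a monotone conjunction over variables x_0..x_{n-1}.

    `equivalence_query(hypothesis)` returns None if the hypothesis (a set of
    variable indices whose AND is the target concept) is correct, otherwise a
    counterexample: an n-bit tuple on which hypothesis and target disagree.
    """
    hypothesis = set(range(n))  # start with the most specific conjunction
    while True:
        counterexample = equivalence_query(hypothesis)
        if counterexample is None:
            return hypothesis
        # The hypothesis only shrinks, so it is always at least as specific as
        # the target; any counterexample is therefore a positive example, and
        # we drop every variable set to 0 in it.
        hypothesis = {i for i in hypothesis if counterexample[i] == 1}

# Illustrative target: x_0 AND x_2 over n = 4 variables.
TARGET = {0, 2}

def eq_oracle(hypothesis):
    # Brute-force search for a disagreement (fine for tiny n).
    for bits in product((0, 1), repeat=4):
        if all(bits[i] for i in TARGET) != all(bits[i] for i in hypothesis):
            return bits
    return None

print(learn_monotone_conjunction(4, eq_oracle))  # → {0, 2}
```

Each counterexample removes at least one variable from the hypothesis, so the learner makes at most n + 1 equivalence queries — the kind of polynomial query bound the combinatorial characterization above is about.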
Cryptographic Sensing
Is it possible to measure a physical object in a way that makes the measurement signals unintelligible to an external observer? Alternatively, can one learn a natural concept by using a contrived training set that makes the labeled examples useless without the line of thought that has led to their choice?
We initiate a study of "cryptographic sensing" problems of this type, presenting definitions, positive and negative results, and directions for further research.
Teaching, learning, and exploration
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Mathematics, 1994. Includes bibliographical references (p. 81-85). By Yiqun Yin.
An average-case depth hierarchy theorem for Boolean circuits
We prove an average-case depth hierarchy theorem for Boolean circuits over the standard basis of AND, OR, and NOT gates. Our hierarchy theorem says that for every $d \geq 2$, there is an explicit $n$-variable Boolean function $f$, computed by a linear-size depth-$d$ formula, which is such that any depth-$(d-1)$ circuit that agrees with $f$ on $(1/2 + o_n(1))$ fraction of all inputs must have size $\exp(n^{\Omega(1/d)})$. This answers an open question posed by Håstad in his Ph.D. thesis.
Our average-case depth hierarchy theorem implies that the polynomial hierarchy is infinite relative to a random oracle with probability 1, confirming a conjecture of Håstad, Cai, and Babai. We also use our result to show that there is no "approximate converse" to the results of Linial, Mansour, Nisan and Boppana on the total influence of small-depth circuits, thus answering a question posed by O'Donnell, Kalai, and Hatami.
A key ingredient in our proof is a notion of "random projections" which generalizes random restrictions.
Learning algorithms with applications to robot navigation and protein folding
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1996. Includes bibliographical references (leaves 109-117). By Mona Singh.
Learning Possibilistic Logic Theories
We address the problem of learning interpretable machine learning models from uncertain and missing information. We first develop a novel deep learning architecture, named RIDDLE (Rule InDuction with Deep LEarning), based on properties of possibility theory. Experimental results and a comparison with FURIA, an existing state-of-the-art rule induction method, show that RIDDLE is a promising algorithm for finding rules in data. We then formally investigate the learning task of identifying rules with confidence degrees associated with them in the exact learning model. We formally define theoretical frameworks and show conditions that must hold to guarantee that a learning algorithm will identify the rules that hold in a domain. Finally, we develop an algorithm that learns rules with associated confidence values in the exact learning model. We also propose a technique for simulating queries in the exact learning model from data. Experiments show encouraging results for learning a set of rules that approximates the rules encoded in the data.
Doctoral thesis.
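The idea of simulating exact-learning queries from data can be illustrated with a standard sampling-based sketch (in the spirit of Angluin's reduction of equivalence queries to random examples; the dataset, hypothesis, and parameter names below are illustrative, not the thesis's actual procedure):

```python
import random

def simulated_equivalence_query(hypothesis, dataset, sample_size=50):
    """Approximate an equivalence query using labelled data.

    `dataset` is a list of (x, label) pairs and `hypothesis(x)` predicts a
    label. Returns a (x, label) pair on which the hypothesis errs, or None
    if a random sample of the data exhibits no disagreement, in which case
    the hypothesis is accepted as (probably approximately) correct.
    """
    sample = random.sample(dataset, min(sample_size, len(dataset)))
    for x, label in sample:
        if hypothesis(x) != label:
            return (x, label)  # counterexample found in the data
    return None

# Illustrative data: all 4-bit inputs labelled by the rule "x[0] AND x[2]".
data = [(bits, int(bits[0] == 1 and bits[2] == 1))
        for bits in [(a, b, c, d)
                     for a in (0, 1) for b in (0, 1)
                     for c in (0, 1) for d in (0, 1)]]

too_specific = lambda x: int(all(x))  # overly specific hypothesis
print(simulated_equivalence_query(too_specific, data))  # some counterexample
```

Since the sample is drawn from the data rather than the whole instance space, a returned None only certifies consistency with the observed examples — which is exactly why such simulations yield approximate rather than exact identification guarantees.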