64 research outputs found
Inductive inference and computable numberings
It has been previously observed that for many TxtEx-learnable computable families of computably enumerable (c.e., for short) sets, all their computable numberings are evidently 0′-equivalent, i.e. equivalent with respect to reductions computable in the halting problem. We show that this holds for all TxtEx-learnable computable families of c.e. sets, and prove that, in general, the converse is not true. In fact, there is a computable family A of c.e. sets such that all computable numberings of A are computably equivalent and A is not TxtEx-learnable. Moreover, we construct a computable family of c.e. sets which is not TxtBC-learnable even though all of its computable numberings are 0′-equivalent. We also give a natural example of a computable TxtBC-learnable family of c.e. sets which possesses non-0′-equivalent computable numberings. So, for computable families of c.e. sets, the properties of TxtBC-learnability and 0′-equivalence of all computable numberings are independent.
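The equivalence of numberings discussed above can be made concrete in a toy sketch (the family, numberings, and reduction below are illustrative inventions, not from the paper): a numbering assigns a set to each index, and one numbering reduces to another when a computable translation of indices preserves the named set.

```python
# Illustrative sketch (toy example, not from the paper): a "numbering" of a
# family of sets maps indices to sets; a numbering nu reduces to mu if some
# computable f translates indices so that nu(i) == mu(f(i)) for all i.

# A toy family A = {evens, odds} with two different numberings.
family = {"evens": frozenset(range(0, 10, 2)), "odds": frozenset(range(1, 10, 2))}

def nu(i):
    # Numbering 1: even indices name the evens, odd indices name the odds.
    return family["evens"] if i % 2 == 0 else family["odds"]

def mu(i):
    # Numbering 2: indices below 5 name the odds, the rest name the evens.
    return family["odds"] if i < 5 else family["evens"]

def f(i):
    # Computable reduction witnessing nu <= mu: nu(i) == mu(f(i)).
    return 5 if i % 2 == 0 else 0

assert all(nu(i) == mu(f(i)) for i in range(20))
```

A symmetric translation in the other direction would witness full equivalence; in the paper's setting the interesting question is whether such translations can be found computably or only with the halting problem as an oracle.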
A generalized characterization of algorithmic probability
An a priori semimeasure (also known as "algorithmic probability" or "the
Solomonoff prior" in the context of inductive inference) is defined as the
transformation, by a given universal monotone Turing machine, of the uniform
measure on the infinite strings. It is shown in this paper that the class of a
priori semimeasures can equivalently be defined as the class of
transformations, by all compatible universal monotone Turing machines, of any
continuous computable measure in place of the uniform measure. Some
consideration is given to possible implications for the prevalent association
of algorithmic probability with certain foundational statistical principles.
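As a toy illustration of the transformation described above (using the identity map in place of a universal monotone machine, so the result is merely a computable measure, not an a priori semimeasure proper), the measure that a monotone machine induces on output prefixes can be estimated by Monte Carlo sampling of inputs under the uniform measure:

```python
# Toy sketch (illustrative only): push the uniform measure on input bits
# through a monotone machine and estimate the induced measure of a cylinder
# [x] = {outputs extending x}. With the identity machine, that measure is
# exactly 2^(-len(x)).
import random

def identity_machine(bits):
    # A trivially monotone "machine": the output so far is the input so far.
    return bits

def estimate_semimeasure(x, trials=100_000, rng=random.Random(0)):
    hits = sum(
        identity_machine([rng.randrange(2) for _ in range(len(x))]) == x
        for _ in range(trials)
    )
    return hits / trials

est = estimate_semimeasure([1, 0, 1])
assert abs(est - 2 ** -3) < 0.01  # close to the exact value 0.125
```

A universal monotone machine in place of `identity_machine` would turn the same construction into an a priori semimeasure; the paper's point is that the uniform input measure may itself be replaced by any continuous computable measure.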
On the Invariance of Gödel's Second Theorem with regard to Numberings
The prevalent interpretation of Gödel's Second Theorem states that a sufficiently adequate and consistent theory does not prove its own consistency. It is, however, not entirely clear how to justify this informal reading, as the formulation of the underlying mathematical theorem depends on several arbitrary formalisation choices. In this paper I examine the theorem's dependency on Gödel numberings. I introduce deviant numberings, yielding provability predicates satisfying Löb's conditions, which result in provable consistency sentences. According to the main result of this paper, however, these "counterexamples" do not refute the theorem's prevalent interpretation, since once a natural class of admissible numberings is singled out, invariance is maintained.
Comment: Forthcoming in The Review of Symbolic Logic.
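For reference, the Löb conditions mentioned above are the standard derivability conditions for a provability predicate Pr_T (standard textbook formulation, not quoted from the paper); the Second Theorem then says that if T is consistent and Pr_T satisfies these conditions, then T does not prove the consistency sentence ¬Pr_T(⌜0=1⌝).

```latex
% Löb's derivability conditions for a provability predicate Pr_T,
% stated relative to a fixed Gödel numbering (the choice the paper varies):
\begin{align*}
\text{D1: } & T \vdash \varphi \;\Longrightarrow\; T \vdash \mathrm{Pr}_T(\ulcorner \varphi \urcorner) \\
\text{D2: } & T \vdash \mathrm{Pr}_T(\ulcorner \varphi \rightarrow \psi \urcorner)
              \rightarrow \bigl(\mathrm{Pr}_T(\ulcorner \varphi \urcorner)
              \rightarrow \mathrm{Pr}_T(\ulcorner \psi \urcorner)\bigr) \\
\text{D3: } & T \vdash \mathrm{Pr}_T(\ulcorner \varphi \urcorner)
              \rightarrow \mathrm{Pr}_T(\ulcorner \mathrm{Pr}_T(\ulcorner \varphi \urcorner) \urcorner)
\end{align*}
```

A deviant numbering changes the coding ⌜·⌝ itself, and hence which arithmetical predicate Pr_T one obtains; this is the formalisation choice whose effect the paper isolates.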
Inductive inference of recursive functions: complexity bounds
This survey covers principal results on the complexity of inductive inference for recursively enumerable classes of total recursive functions. Inductive inference is the process of finding an algorithm from sample computations. When the given class of functions is recursively enumerable, it is easy to define a natural complexity measure for inductive inference, namely the worst-case mindchange number for the first n functions in the given class. Naturally, the complexity depends not only on the class, but also on the numbering, i.e. which function is the first, which one is the second, etc. It turns out that, if the result of inference is a Gödel number, then the complexity of inference may vary between log₂ n + o(log₂ n) and an arbitrarily slow-growing recursive function. If the result of the inference is an index in the numbering of the recursively enumerable class, then the complexity may go up to const · n. Additionally, effects previously found in Kolmogorov complexity theory appear in the complexity of inductive inference as well.
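The const · n regime for class indices can be seen in the classical identification-by-enumeration learner, sketched below (an illustrative toy, not code from the survey): the learner outputs the least index in the class's numbering consistent with the data seen so far, so on any of the first n functions it changes its mind at most n − 1 times.

```python
# Sketch (illustrative): identification by enumeration for a uniformly
# computable class f_0, f_1, ..., outputting indices in the class's own
# numbering. The hypothesis is the least index consistent with all data seen
# so far; each mind change moves the index strictly upward, so the worst-case
# mindchange number over the first n functions is at most n - 1.

def learn(cls, target, horizon):
    """cls: list of total functions (a finite cut of the numbering).
    Yields the current hypothesis index after each data point."""
    hyp = 0
    for x in range(horizon):
        datum = target(x)
        while cls[hyp](x) != datum or any(cls[hyp](y) != target(y) for y in range(x)):
            hyp += 1  # mind change: advance to the next candidate index
        yield hyp

cls = [lambda x, c=c: c for c in range(5)]  # the constant functions 0..4
hyps = list(learn(cls, cls[3], horizon=4))
assert hyps == [3, 3, 3, 3]  # settles on index 3 and never changes again
```

The survey's log₂ n + o(log₂ n) bound for Gödel numbers requires a cleverer learner that halves the candidate space per mind change rather than stepping through it linearly.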
Classifying the Arithmetical Complexity of Teaching Models
This paper classifies the complexity of various teaching models by their
position in the arithmetical hierarchy. In particular, we determine the
arithmetical complexity of the index sets of the following classes: (1) the
class of uniformly r.e. families with finite teaching dimension, and (2) the
class of uniformly r.e. families with finite positive recursive teaching
dimension witnessed by a uniformly r.e. teaching sequence. We also derive the
arithmetical complexity of several other decision problems in teaching, such as
the problem of deciding, given an effective coding of all uniformly r.e. families, any family L from this coding, any concept C in L and any bound d, whether or not the teaching dimension of C with respect to L is upper bounded by d.
Comment: 15 pages in International Conference on Algorithmic Learning Theory, 201
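For a finite family, the teaching dimension mentioned above can be computed by brute force, as in the following sketch (toy example, not from the paper): it is the least number of labeled examples that distinguish a concept from every other concept in the family.

```python
# Illustrative sketch: teaching dimension of a concept within a finite class.
# TD(C, concepts) is the least k such that some k examples, labeled according
# to C, are inconsistent with every other concept in the class.
from itertools import combinations

def teaching_dimension(concept, concepts, domain):
    others = [c for c in concepts if c != concept]
    for k in range(len(domain) + 1):
        for sample in combinations(domain, k):
            # Does this sample, labeled by `concept`, rule out all others?
            if all(any((x in c) != (x in concept) for x in sample) for c in others):
                return k
    return None  # unreachable when all concepts in the class are distinct

domain = range(3)
concepts = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({0, 1, 2})]
# {0} needs two examples: 0 labeled positive and 1 labeled negative.
assert teaching_dimension(frozenset({0}), concepts, domain) == 2
# The empty set is taught by the single negative example 0.
assert teaching_dimension(frozenset(), concepts, domain) == 1
```

For uniformly r.e. families the domain and the family are infinite, which is exactly why deciding bounds on the teaching dimension becomes a question about position in the arithmetical hierarchy rather than a finite search.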
Learning and consistency
In designing learning algorithms it seems quite reasonable to construct them in such a way that all data the algorithm has already obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail. It may lead to the unsolvability of the learning problem, or it may exclude any efficient solution of it. Therefore we study several types of consistent learning in recursion-theoretic inductive inference. We show that these types are not of universal power. We give “lower bounds” on this power. We characterize these types by some versions of decidability of consistency with respect to suitable “non-standard” spaces of hypotheses. Then we investigate the problem of learning consistently in polynomial time. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to work inconsistently.
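The consistency requirement described above can be stated operationally, as in this sketch (the helper and example functions are hypothetical, not from the paper): at every step, the current hypothesis must reproduce every datum seen so far.

```python
# Illustrative sketch: a learner's run is "consistent" if the hypothesis
# output after each datum agrees with all data obtained up to that point.

def is_consistent_run(hypotheses, data):
    """hypotheses[t] is the function output after seeing data[:t+1];
    data is a list of (x, f(x)) pairs."""
    return all(
        h(x) == y
        for t, h in enumerate(hypotheses)
        for (x, y) in data[:t + 1]
    )

square = lambda x: x * x
data = [(0, 0), (1, 1), (2, 4)]
# Always guessing `square` is consistent on this data...
assert is_consistent_run([square] * 3, data)
# ...but switching to the identity at the last step is not (it maps 2 to 2, not 4).
assert not is_consistent_run([square, square, lambda x: x], data)
```

The paper's point is that demanding this property at every step can make an otherwise solvable learning problem unsolvable, or destroy any polynomial-time solution.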
Learning Recursive Functions Refutably
Learning recursive functions refutably means that, for every recursive function, the learning machine has either to learn this function or to refute it, i.e., to signal that it is not able to learn it. Three modes of making the notion of refutation precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where already the most stringent of them turns out to be of remarkable topological and algorithmic richness. All these types are closed under union, though in different strengths. Also, these types are shown to differ with respect to their intrinsic complexity; two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general.
For learning with anomalies refutably, we show that several results from standard learning without refutation carry over to the refutable setting. We then derive hierarchies for refutable learning. Finally, we show that stricter refutability constraints cannot be traded for more liberal learning criteria.
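In miniature, the learn-or-refute behaviour described above looks as follows (a toy sketch with an invented class, not from the paper): the learner handles one fixed class of functions and signals refutation as soon as the data are inconsistent with every member of that class.

```python
# Illustrative sketch of refutable learning: the machine either converges to
# a correct hypothesis for functions inside its class, or outputs a
# refutation signal once the data leave the class. Toy class: the constant
# functions.

REFUTE = "refute"

def refutable_learner(data_points):
    """data_points: list of (x, y) pairs sampled from some total function.
    Returns the constant value as hypothesis, or REFUTE when the data are
    inconsistent with every constant function."""
    values = {y for _, y in data_points}
    return values.pop() if len(values) == 1 else REFUTE

assert refutable_learner([(0, 7), (1, 7)]) == 7        # learned: f = 7
assert refutable_learner([(0, 7), (1, 8)]) == REFUTE   # not constant: refute
```

The three modes studied in the paper differ in when the refutation signal must appear: once and forever, from some point on, or on infinitely many inputs.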
Tradeoffs in the inductive inference of nearly minimal size programs
Inductive inference machines are algorithmic devices which attempt to synthesize (in the limit) programs for a function while they examine more and more of the graph of the function. There are many possible criteria of success. We study the inference of nearly minimal size programs. Our principal results imply that nearly minimal size programs can be inferred (in the limit) without loss of inferring power, provided we are willing to tolerate a finite, but not uniformly bounded, number of anomalies in the synthesized programs. On the other hand, there is a severe reduction of inferring power in inferring nearly minimal size programs if the maximum number of anomalies allowed is any uniform constant. We obtain a general characterization of the classes of recursive functions which can be synthesized by inferring nearly minimal size programs with anomalies. We also obtain similar results for Popperian inductive inference machines. The exact tradeoffs between mind change bounds on inductive inference machines and anomalies in synthesized programs are obtained. The techniques of recursive function theory, including the recursion theorem, are employed.
Measuring Learning Complexity with Criteria Epitomizers
In prior papers, beginning with the seminal work by Freivalds et al. 1995, the notion of intrinsic complexity is used to analyze the learning complexity of sets of functions in a Gold-style learning setting. Herein, some weaknesses of this notion are pointed out, and an alternative is offered based on epitomizing sets of functions: sets which are learnable under a given learning criterion, but not under other criteria which are not at least as powerful.
To capture the idea of epitomizing sets, new reducibility notions are given based on robust learning (closure of learning under certain classes of operators). Various degrees of epitomizing sets are characterized as the sets complete with respect to the corresponding reducibility notions. These characterizations also provide an easy method for showing sets to be epitomizers, and they are then employed to prove several sets to be epitomizing.
Furthermore, a scheme is provided to easily generate very strong epitomizers for a multitude of learning criteria. These strong epitomizers are so-called self-learning sets, previously applied by Case & Koetzing, 2010. They can be generated and employed in a myriad of settings to witness the strict separation in learning power between the criteria so epitomized and other, less powerful criteria.