
    Polynomial Learnability of Semilinear Sets

    We characterize learnability and non-learnability of subsets of N^m called 'semilinear sets', with respect to the distribution-free learning model of Valiant. In formal language terms, semilinear sets are exactly the class of 'letter-counts' (or Parikh images) of regular sets. We show that the class of semilinear sets of dimensions 1 and 2 is learnable when the integers are encoded in unary. We complement this result with negative results of several different sorts, relying on hardness assumptions of varying degrees, from P ≠ NP and RP ≠ NP to the hardness of learning DNF. We show that the minimal consistent concept problem is NP-complete for this class, verifying the non-triviality of our learnability result. We also show that with respect to the binary encoding of integers, the corresponding 'prediction' problem is already as hard as that of DNF, for a class of subsets of N^m much simpler than semilinear sets. The present work represents an interesting class of countably infinite concepts for which the questions of learnability have been nearly completely characterized. In doing so, we demonstrate how various proof techniques developed by Pitt and Valiant [14], Blumer et al. [3], and Pitt and Warmuth [16] can be fruitfully applied in the context of formal languages.
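    As an illustrative aside (not part of the abstract), the Parikh image, or 'letter-count', mentioned above maps a word to the vector counting how often each alphabet symbol occurs; a minimal sketch:

    ```python
    from collections import Counter

    def parikh_image(word, alphabet):
        """Map a word to its letter-count vector over a fixed alphabet ordering."""
        counts = Counter(word)
        return tuple(counts.get(symbol, 0) for symbol in alphabet)

    # The regular set (ab)* has Parikh image {(n, n) : n >= 0},
    # a semilinear subset of N^2.
    print(parikh_image("abab", "ab"))  # (2, 2)
    ```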

    Benchmarking Compositionality with Formal Languages

    Recombining known primitive concepts into larger novel combinations is a quintessentially human cognitive capability. Whether large neural models in NLP can acquire this ability while learning from data is an open question. In this paper, we investigate this problem from the perspective of formal languages. We use deterministic finite-state transducers to make an unbounded number of datasets with controllable properties governing compositionality. By randomly sampling over many transducers, we explore which of their properties contribute to learnability of a compositional relation by a neural network. We find that the models either learn the relations completely or not at all. The key is transition coverage, setting a soft learnability limit at 400 examples per transition.
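    As an illustration (not from the abstract), a deterministic finite-state transducer of the kind described can be sketched as a transition table mapping (state, input symbol) to (next state, output string); the toy transducer and names below are hypothetical:

    ```python
    def run_transducer(transitions, start_state, word):
        """Run a deterministic finite-state transducer on `word`,
        returning the concatenated output string."""
        state, output = start_state, []
        for symbol in word:
            state, out = transitions[(state, symbol)]
            output.append(out)
        return "".join(output)

    # A toy single-state transducer: doubles 'a' and rewrites 'b' to 'c'.
    toy = {
        ("q0", "a"): ("q0", "aa"),
        ("q0", "b"): ("q0", "c"),
    }
    print(run_transducer(toy, "q0", "aba"))  # aacaa
    ```

    Sampling many such tables at random and pairing inputs with their outputs yields datasets whose compositional structure is fully controlled by the transition table.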

    What is usability in the context of the digital library and how can it be measured?

    This paper reviews how usability has been defined in the context of the digital library and which methods have been applied and how applicable they are, and proposes an evaluation model and a suite of instruments for evaluating usability for academic digital libraries. The model examines effectiveness, efficiency, satisfaction, and learnability. It is found that there exists an interlocking relationship among effectiveness, efficiency, and satisfaction. It also examines how learnability interacts with these three attributes.

    Towards unsupervised ontology learning from data

    Data-driven elicitation of ontologies from structured data is a well-recognized knowledge acquisition bottleneck. The development of efficient techniques for (semi-)automating this task is therefore practically vital, yet hindered by the lack of robust theoretical foundations. In this paper, we study the problem of learning Description Logic TBoxes from interpretations, which naturally translates to the task of ontology learning from data. In the presented framework, the learner is provided with a set of positive interpretations (i.e., logical models) of the TBox adopted by the teacher. The goal is to correctly identify the TBox given this input. We characterize the key constraints on the models that warrant finite learnability of TBoxes expressed in selected fragments of the Description Logic EL and define corresponding learning algorithms. This work was funded in part by the National Research Foundation under Grant no. 85482.

    Pac-Learning Recursive Logic Programs: Efficient Algorithms

    We present algorithms that learn certain classes of function-free recursive logic programs in polynomial time from equivalence queries. In particular, we show that a single k-ary recursive constant-depth determinate clause is learnable. Two-clause programs consisting of one learnable recursive clause and one constant-depth determinate non-recursive clause are also learnable, if an additional "base case" oracle is assumed. These results immediately imply the PAC-learnability of these classes. Although these classes of learnable recursive programs are very constrained, it is shown in a companion paper that they are maximally general, in that generalizing either class in any natural way leads to a computationally difficult learning problem. Thus, taken together with its companion paper, this paper establishes a boundary of efficient learnability for recursive logic programs.

    Complexity Metrics for Systems Development Methods and Techniques

    So many systems development methods have been introduced in the last decade that one can talk about a "methodology jungle". To aid the method developers and evaluators in fighting their way through this jungle, we propose a systematic approach for measuring properties of methods. We describe two sets of metrics which measure the complexity of single diagram techniques, and of complete systems development methods. The proposed metrics provide a relatively fast and simple way to analyse the descriptive capabilities of a technique or method. When accompanied with other selection criteria, the metrics can be used for estimating the relative complexity of a technique compared to others. To demonstrate the applicability of the metrics, we have applied them to 36 techniques and 11 methods.

    Methodological development

    Book description: Human-Computer Interaction draws on the fields of computer science, psychology, cognitive science, and organisational and social sciences in order to understand how people use and experience interactive technology. Until now, researchers have been forced to return to the individual subjects to learn about research methods and how to adapt them to the particular challenges of HCI. This is the first book to provide a single resource through which a range of commonly used research methods in HCI are introduced. Chapters are authored by internationally leading HCI researchers who use examples from their own work to illustrate how the methods apply in an HCI context. Each chapter also contains key references to help researchers find out more about each method as it has been used in HCI. Topics covered include experimental design, use of eyetracking, qualitative research methods, cognitive modelling, how to develop new methodologies, and writing up your research.

    Towards a Law of Invariance in Human Concept Learning

    Invariance principles underlie many key theories in modern science. They provide the explanatory and predictive framework necessary for the rigorous study of natural phenomena ranging from the structure of crystals, to magnetism, to relativistic mechanics. Vigo (2008, 2009) introduced a new general notion and principle of invariance from which two parameter-free (ratio and exponential) models were derived to account for human conceptual behavior. Here we introduce a new parameterized exponential "law" based on the same invariance principle. The law accurately predicts the subjective degree of difficulty that humans experience when learning different types of concepts. In addition, it precisely fits the data from a large-scale experiment which examined a total of 84 category structures across 10 category families (R² = .97, p < .0001; r = .98, p < .0001). Moreover, it overcomes seven key challenges that had, hitherto, been grave obstacles for theories of concept learning.