
    Polynomial Learnability of Semilinear Sets

    We characterize learnability and non-learnability of subsets of N^m called 'semilinear sets', with respect to the distribution-free learning model of Valiant. In formal language terms, semilinear sets are exactly the class of 'letter-counts' (or Parikh images) of regular sets. We show that the class of semilinear sets of dimensions 1 and 2 is learnable when the integers are encoded in unary. We complement this result with negative results of several different sorts, relying on hardness assumptions of varying degrees, from P ≠ NP and RP ≠ NP to the hardness of learning DNF. We show that the minimal consistent concept problem is NP-complete for this class, verifying the non-triviality of our learnability result. We also show that with respect to the binary encoding of integers, the corresponding 'prediction' problem is already as hard as that of DNF, for a class of subsets of N^m much simpler than semilinear sets. The present work represents an interesting class of countably infinite concepts for which the questions of learnability have been nearly completely characterized. In doing so, we demonstrate how various proof techniques developed by Pitt and Valiant [14], Blumer et al. [3], and Pitt and Warmuth [16] can be fruitfully applied in the context of formal languages.
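As a concrete illustration of the objects in this abstract (a sketch for orientation, not code from the paper): a linear set L(c, P) ⊆ N^m consists of all vectors c + a1·p1 + … + ak·pk with natural-number coefficients ai, and a semilinear set is a finite union of linear sets. A minimal brute-force membership test in Python, where the coefficient `bound` is an assumption chosen only to keep the toy search finite:

```python
from itertools import product

def in_linear_set(x, c, periods, bound=20):
    """Brute-force test whether vector x lies in the linear set
    L(c, P) = { c + a1*p1 + ... + ak*pk : ai in N },
    searching coefficient tuples up to `bound` (enough for small examples)."""
    k = len(periods)
    for coeffs in product(range(bound + 1), repeat=k):
        v = tuple(c[i] + sum(a * p[i] for a, p in zip(coeffs, periods))
                  for i in range(len(c)))
        if v == tuple(x):
            return True
    return False

def in_semilinear_set(x, components):
    """A semilinear set is a finite union of linear sets (c, P)."""
    return any(in_linear_set(x, c, P) for c, P in components)

# The Parikh image (letter-count vector) of the regular language (ab)*
# over {a, b} is the linear set L((0,0), {(1,1)}): equal counts of a and b.
assert in_linear_set((3, 3), (0, 0), [(1, 1)])
assert not in_linear_set((2, 3), (0, 0), [(1, 1)])
```

The `(ab)*` example shows the abstract's point that semilinear sets are exactly the Parikh images of regular sets.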

    Learning semilinear sets from examples and via queries

    Semilinear sets play an important role in parallel computation models such as matrix grammars, commutative grammars, and Petri nets. In this paper, we consider the problems of learning semilinear sets from examples and via queries. We show that (1) the family of semilinear sets is not learnable from positive examples only, while the family of linear sets is learnable from positive examples only, although the problem of learning linear sets from positive examples appears to be computationally intractable; (2) if, for any unknown semilinear set S_u and any conjectured semilinear set S′, queries of whether or not S_u ⊆ S′ and whether or not S′ ⊆ S_u can be made, then there exists a learning procedure that identifies any semilinear set and halts, although the procedure is time-consuming; (3) however, under the same condition, for each fixed dimension there exist meaningful subfamilies of semilinear sets learnable in time polynomial in the minimum size of representations; in particular, for any dimension, if for any unknown linear set L_u and any conjectured semilinear set S′ queries of whether or not L_u ⊆ S′ can be made, then the family of linear sets is learnable in time polynomial in the minimum size of representations and the dimension.
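Point (2) can be illustrated with a toy version of the two-sided subset-query protocol (an assumption-laden sketch, not the paper's procedure): restrict to dimension 1, represent a semilinear set as a finite union of linear sets L(c, p) = {c + a·p : a ∈ N}, and let the learner enumerate conjectures S′, halting when both queries S_u ⊆ S′ and S′ ⊆ S_u are answered yes. Here the subset oracles are approximated by comparing expansions on a bounded range, and the enumeration limits `max_const`/`max_union` are assumptions that keep the toy finite:

```python
from itertools import combinations_with_replacement, product

B = 200  # the toy oracle checks containment only on [0, B] -- an approximation

def members(rep, limit=B):
    """Expand a representation (a collection of (c, p) linear sets over N,
    with L(c, p) = {c + a*p : a in N} and p == 0 meaning the singleton {c})
    into its elements up to `limit`."""
    out = set()
    for c, p in rep:
        if p == 0:
            if c <= limit:
                out.add(c)
        else:
            out.update(range(c, limit + 1, p))
    return out

def learn(target_rep, max_const=5, max_union=2):
    """Enumerate conjectures in order of size; ask both subset queries;
    halt as soon as S_u <= S' and S' <= S_u both hold."""
    target = members(target_rep)
    basics = list(product(range(max_const + 1), repeat=2))  # candidate (c, p)
    for k in range(1, max_union + 1):
        for rep in combinations_with_replacement(basics, k):
            conj = members(rep)
            if target <= conj and conj <= target:
                return rep
    return None

rep = learn({(0, 3)})  # unknown set S_u: the multiples of 3
assert members(rep) == members({(0, 3)})
```

The brute-force enumeration also mirrors the abstract's caveat that the halting procedure is time-consuming.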

    Learning SECp Languages from Only Positive Data

    The field of Grammatical Inference provides a good theoretical framework for investigating a learning process. Formal results in this field can be relevant to the question of first language acquisition. However, Grammatical Inference studies have been focused mainly on mathematical aspects, and have not exploited the linguistic relevance of their results. With this paper, we try to enrich Grammatical Inference studies with ideas from Linguistics. We propose a non-classical mechanism that has relevant linguistic and computational properties, and we study its learnability from positive data.

    Cumulative subject index Volumes 90–95


    Polynomial Learnability and Locality of Formal Grammars

    We apply a complexity-theoretic notion of feasible learnability, called polynomial learnability, to the evaluation of grammatical formalisms for linguistic description. We show that a novel, nontrivial constraint on the degree of locality of grammars allows not only context-free languages but also a rich class of mildly context-sensitive languages to be polynomially learnable. We discuss possible implications of this result for the theory of natural language acquisition.

    Symbolic Inference Methods for Databases

    This dissertation is a summary of a line of research, in which I was actively involved, on learning in databases from examples. This research focused on traditional as well as novel database models and languages for querying, transforming, and describing the schema of a database. In the case of schemas, our contributions involve proposing original languages for the emerging data models of Unordered XML and RDF. We have studied learning from examples of schemas for Unordered XML, schemas for RDF, twig queries for XML, join queries for relational databases, and XML transformations defined with a novel model of tree-to-word transducers. Investigating learnability of the proposed languages required us to examine closely a number of their fundamental properties, often of independent interest, including normal forms, minimization, containment and equivalence, consistency of a set of examples, and finite characterizability. A good understanding of these properties allowed us to devise learning algorithms that explore a possibly large search space, with the help of a carefully designed set of generalization operations, in search of an appropriate solution. Learning (or inference) is a problem with two parameters: the precise class of languages we wish to infer and the type of input that the user can provide. We focused on the setting where the user input consists of positive examples, i.e., elements that belong to the goal language, and negative examples, i.e., elements that do not belong to the goal language. In general, using both negative and positive examples allows one to learn richer classes of goal languages than using positive examples alone. However, using negative examples is often difficult because, together with positive examples, they may cause the search space to take a very complex shape, and its exploration may turn out to be computationally challenging.
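The role of positive versus negative examples described above can be made concrete with a deliberately simple "language" class that is not among the dissertation's (integer intervals, a hypothetical stand-in for twig or join queries): positives drive generalization toward the tightest consistent hypothesis, while negatives act as the consistency check that prunes the search space:

```python
def learn_interval(positives, negatives):
    """Toy learner for interval 'languages' [lo, hi] over the integers:
    generalize the positives to the tightest enclosing interval, then use
    the negatives as a consistency check on that hypothesis."""
    if not positives:
        return None
    lo, hi = min(positives), max(positives)
    # A negative example inside the interval rules the hypothesis out;
    # since this is the tightest candidate, no consistent interval exists.
    if any(lo <= n <= hi for n in negatives):
        return None
    return (lo, hi)

assert learn_interval([3, 7, 5], [10, 1]) == (3, 7)
assert learn_interval([3, 7], [5]) is None  # negatives expose inconsistency
```

Even in this tiny class, the second call shows how adding negatives can make a sample inconsistent, which is exactly the extra difficulty the summary attributes to mixed examples.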

    Optimization with learning-informed differential equation constraints and its applications

    Inspired by applications in optimal control of semilinear elliptic partial differential equations and physics-integrated imaging, differential equation constrained optimization problems with constituents that are only accessible through data-driven techniques are studied. A particular focus is on the analysis and on numerical methods for problems with machine-learned components. For a rather general context, an error analysis is provided, and particular properties resulting from artificial neural network based approximations are addressed. Moreover, for each of the two inspiring applications analytical details are presented and numerical results are provided

    Sample Size Estimates for Risk-Neutral Semilinear PDE-Constrained Optimization

    The sample average approximation (SAA) approach is applied to risk-neutral optimization problems governed by semilinear elliptic partial differential equations with random inputs. After constructing a compact set that contains the SAA critical points, we derive nonasymptotic sample size estimates for SAA critical points using the covering number approach. Thereby, we derive upper bounds on the number of samples needed to obtain accurate critical points of the risk-neutral PDE-constrained optimization problem through SAA critical points. We quantify accuracy using expectation and exponential tail bounds. Numerical illustrations are presented. Comment: 26 pages, 10 figures.
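The SAA idea itself can be sketched in a deliberately finite-dimensional toy (nothing like the PDE-constrained setting of the paper): replace the expectation in min_u E[(u − ξ)²] by a sample average, whose critical point is simply the sample mean and approaches the true minimizer E[ξ] as the sample size grows; the tolerance below is an assumption consistent with the fixed seed:

```python
import random
import statistics

def saa_minimizer(samples):
    """SAA replaces the expectation in min_u E[(u - xi)^2] by the sample
    average (1/N) * sum_i (u - xi_i)^2; setting the derivative to zero
    shows the critical point of the sample problem is the sample mean."""
    return statistics.fmean(samples)

random.seed(0)
xi = [random.random() for _ in range(10_000)]  # xi ~ Uniform(0, 1)
u_saa = saa_minimizer(xi)
# The risk-neutral critical point is E[xi] = 0.5; SAA approaches it as N grows.
assert abs(u_saa - 0.5) < 0.02
```

In the paper's setting u lives in a function space and the constraint is a semilinear PDE, but the sample-size question is the same: how large must N be for the SAA critical point to be accurate.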