
    Accepting grammars and systems

    We investigate several kinds of regulated rewriting (programmed, matrix, with regular control, ordered, and variants thereof) and of parallel rewriting mechanisms (Lindenmayer systems, uniformly limited Lindenmayer systems, limited Lindenmayer systems, and scattered context grammars) as accepting devices, in contrast with the usual generating mode. In some cases, the accepting mode turns out to be just as powerful as the generating mode, e.g. within the grammars of the Chomsky hierarchy, random context grammars, grammars with regular control, L systems, uniformly limited L systems, and scattered context grammars. Most of these equivalences can be proved using a metatheorem on so-called context condition grammars. In the case of matrix grammars and programmed grammars without appearance checking, a straightforward construction leads to the desired equivalence result. Interestingly, accepting devices are (strictly) more powerful than their generating counterparts in the case of ordered grammars, programmed and matrix grammars with appearance checking (even programmed grammars with unconditional transfer), and 1lET0L systems. More precisely, if we admit erasing productions, we arrive at new characterizations of the recursively enumerable languages, and if we do not admit them, we get new characterizations of the context-sensitive languages. Moreover, we supplement the published literature by showing:
    - The emptiness and membership problems are recursively solvable for generating ordered grammars, even if we admit erasing productions.
    - Uniformly limited propagating systems can be simulated by programmed grammars without erasing and without appearance checking; hence the emptiness and membership problems are recursively solvable for such systems.
    - We briefly discuss the degree of nondeterminism and the degree of synchronization for devices with limited parallelism.
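
    The contrast between the two modes can be made concrete with a toy context-free grammar. The sketch below is my own illustration, not a construction from the paper: the same rule set is run in generating mode, rewriting the axiom until only terminals remain, and in accepting mode, applying the rules from right to left so that an input word is reduced back to the axiom.

```python
# A minimal sketch (my own illustration, not the paper's construction):
# one context-free rule set used both as a generating and as an accepting
# device. Generating mode rewrites the axiom S into a terminal word;
# accepting mode applies the same rules in reverse (rhs -> lhs), and a
# word is accepted iff some reduction sequence reaches the axiom.

RULES = [("S", "aSb"), ("S", "ab")]   # generates { a^n b^n : n >= 1 }
AXIOM = "S"

def generates(word, sent=AXIOM, depth=12):
    """Generating mode: does some leftmost derivation yield `word`?"""
    if sent == word:
        return True
    # Prune: too deep, or more terminals than the target word has letters.
    if depth == 0 or len(sent.replace("S", "")) > len(word):
        return False
    for lhs, rhs in RULES:
        i = sent.find(lhs)            # rewrite the leftmost nonterminal
        if i >= 0 and generates(word, sent[:i] + rhs + sent[i + len(lhs):], depth - 1):
            return True
    return False

def accepts(sent, depth=12):
    """Accepting mode: reduce `sent` by reversed rules until the axiom."""
    if sent == AXIOM:
        return True
    if depth == 0:
        return False
    for lhs, rhs in RULES:
        i = sent.find(rhs)
        while i >= 0:                 # try every occurrence of the rhs
            if accepts(sent[:i] + lhs + sent[i + len(rhs):], depth - 1):
                return True
            i = sent.find(rhs, i + 1)
    return False

assert generates("aaabbb") and accepts("aaabbb")
assert not generates("aab") and not accepts("aab")
```

    For context-free grammars the two predicates agree on every word, matching the equivalence within the Chomsky hierarchy stated above; the interest of the paper lies in the regulated and limited-parallel devices where this symmetry breaks.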

    Membership for limited ET0L languages is not decidable

    In this paper, we show how to encode an arbitrary enumerable set of numbers, given by a register machine, within limited EPT0L systems and programmed grammars with unconditional transfer. This result has various consequences, e.g. the existence of nonrecursive sets generable by 1lET0L systems or by programmed grammars with unconditional transfer. Moreover, ordered grammars are strictly less powerful than 1lET0L systems.
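
    As a point of reference for what is being encoded, the sketch below implements a register machine of the kind whose computations the paper embeds into limited EPT0L systems; the instruction set and the example program are my own, not the paper's. Any such machine computes a partial function over the naturals, so the sets it defines are exactly the recursively enumerable ones.

```python
# Hedged sketch (instruction set is my own, not the paper's): a simple
# register machine of the kind whose halting computations the paper
# encodes into limited EPT0L systems. A program is a list of instructions:
#   ("inc", r, j)      increment register r, then jump to instruction j
#   ("dec", r, j, k)   if r > 0: decrement r and jump to j; else jump to k
#   ("halt",)
def run(program, registers):
    pc = 0
    while program[pc][0] != "halt":
        op = program[pc]
        if op[0] == "inc":
            registers[op[1]] += 1
            pc = op[2]
        else:  # "dec"
            if registers[op[1]] > 0:
                registers[op[1]] -= 1
                pc = op[2]
            else:
                pc = op[3]
    return registers

# Example program: compute r1 = 2 * r0 (doubling), then halt.
double = [
    ("dec", 0, 1, 3),   # 0: while r0 > 0 ...
    ("inc", 1, 2),      # 1:   r1 += 1
    ("inc", 1, 0),      # 2:   r1 += 1, back to the loop test
    ("halt",),          # 3: done
]
assert run(double, [5, 0]) == [0, 10]
```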

    Asynchronous logic automata

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2008. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 89-92). Numerous applications, from high-performance scientific computing to large, high-resolution multi-touch interfaces to strong artificial intelligence, push the practical physical limits of modern computers. Typical computers attempt to hide the physics as much as possible, running software composed of a series of instructions drawn from an arbitrary set to be executed upon data that can be accessed uniformly. However, we submit that by exposing, rather than hiding, the density and velocity of information and the spatially concurrent, asynchronous nature of logic, scaling down in size and up in complexity becomes significantly easier. In particular, we introduce "asynchronous logic automata", which are a specialization of both asynchronous cellular automata and Petri nets, and include Boolean logic primitives in each cell. We also show some example algorithms, means to create circuits, potential hardware implementations, and comparisons to similar models in past practice. By David Allen Dalrymple. S.M.
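
    The local firing rule can be illustrated with a hedged sketch in the spirit of the thesis; the classes and the firing discipline below are my own simplification, not the thesis's definitions. Each edge carries at most one Boolean token, and a cell fires only when all of its input edges hold tokens and all of its output edges are empty, so computation proceeds cell by cell without a global clock.

```python
# Hedged sketch (data structures are my own simplification): the firing
# discipline of an asynchronous logic cell. Each edge holds at most one
# Boolean token; a cell fires only when every input edge holds a token
# and every output edge is empty -- the Petri-net-style local rule that
# removes the need for global synchronization.

class Edge:
    def __init__(self):
        self.token = None            # None = empty, else a Boolean token

class LogicCell:
    def __init__(self, op, inputs, outputs):
        self.op, self.inputs, self.outputs = op, inputs, outputs

    def try_fire(self):
        ready = (all(e.token is not None for e in self.inputs)
                 and all(e.token is None for e in self.outputs))
        if not ready:
            return False             # blocked: fire later, asynchronously
        value = self.op(*(e.token for e in self.inputs))
        for e in self.inputs:
            e.token = None           # consume the input tokens
        for e in self.outputs:
            e.token = value          # emit the result token
        return True

# A one-cell circuit: NAND of two tokens.
a, b, out = Edge(), Edge(), Edge()
nand = LogicCell(lambda x, y: not (x and y), [a, b], [out])
a.token, b.token = True, True
assert nand.try_fire() and out.token is False
```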

    Baghera Assessment Project, designing an hybrid and emergent educational society

    Edited by Sophie Soury-Lavergne; available at: http://www-leibniz.imag.fr/LesCahiers/2003/Cahier81/BAP_CahiersLaboLeibniz.PDF. Research report. The Baghera Assessment Project (BAP) has the objective to explore a new avenue for the design of e-Learning environments. The key features of BAP's approach are: (i) the concept of emergence in multi-agent systems as a modelling framework, and (ii) the shaping of a new theoretical framework for modelling student knowledge, namely the cK¢ model. This new model has been constructed, based on current research in cognitive science and education, to bridge research on education and research on the design of learning environments.

    A Neurocomputational Model of Grounded Language Comprehension and Production at the Sentence Level

    Get PDF
    While symbolic and statistical approaches to natural language processing have become undeniably impressive in recent years, such systems still display a tendency to make errors that are inscrutable to human onlookers. This disconnect with human processing may stem from the vast differences in the substrates that underlie natural language processing in artificial systems versus biological systems. To create a more relatable system, this dissertation turns to the more biologically inspired substrate of neural networks, describing the design and implementation of a model that learns to comprehend and produce language at the sentence level. The model's task is to ground simulated speech streams, representing a simple subset of English, in terms of a virtual environment. The model learns to understand and answer full-sentence questions about the environment by mimicking the speech stream of another speaker, much as a human language learner would. It is the only known neural model to date that can learn to map natural language questions to full-sentence natural language answers, where both question and answer are represented sublexically as phoneme sequences. The model addresses important points for which most other models, neural and otherwise, fail to account. First, the model learns to ground its linguistic knowledge using human-like sensory representations, gaining language understanding at a deeper level than that of syntactic structure. Second, analysis provides evidence that the model learns combinatorial internal representations, thus gaining the compositionality of symbolic approaches to cognition, which is vital for computationally efficient encoding and decoding of meaning. The model does this while retaining the fully distributed representations characteristic of neural networks, providing the resistance to damage and graceful degradation that are generally lacking in symbolic and statistical approaches. Finally, the model learns via direct imitation of another speaker, allowing it to emulate human processing with greater fidelity, thus increasing the relatability of its behavior. Along the way, this dissertation develops a novel training algorithm that, for the first time, requires only local computations to train arbitrary second-order recurrent neural networks. This algorithm is evaluated on its overall efficacy, biological feasibility, and ability to reproduce peculiarities of human learning such as age-correlated effects in second language acquisition.
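
    For readers unfamiliar with the model family, the following sketch shows the forward step of a generic second-order recurrent network; the shapes, names, and sigmoid nonlinearity are my own choices rather than the dissertation's specification. "Second-order" means that hidden and input units interact multiplicatively through a third-order weight tensor instead of through separate additive weight matrices.

```python
# Hedged sketch (shapes and naming are my own): the forward step of a
# generic second-order recurrent network. Hidden and input units interact
# multiplicatively through the 3-tensor W:
#     h_t[k] = sigmoid( sum_ij W[k, i, j] * h_{t-1}[i] * x_t[j] + b[k] )
import numpy as np

def second_order_step(W, b, h_prev, x):
    # W: (hidden, hidden, input), h_prev: (hidden,), x: (input,)
    pre = np.einsum("kij,i,j->k", W, h_prev, x) + b
    return 1.0 / (1.0 + np.exp(-pre))        # sigmoid activation

rng = np.random.default_rng(0)
H, X = 8, 4
W = rng.normal(0, 0.1, size=(H, H, X))
b = np.zeros(H)
h = np.zeros(H)
for x in rng.normal(size=(10, X)):           # run over a 10-step sequence
    h = second_order_step(W, b, h, x)
print(h.shape)                               # (8,)
```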

    Sequence-to-sequence learning for machine translation and automatic differentiation for machine learning software tools

    This thesis consists of a series of articles that contribute to the field of machine learning. In particular, it covers two distinct and loosely related fields. The first three articles consider the use of neural network models for problems in natural language processing (NLP). The first article introduces the use of an encoder-decoder structure involving recurrent neural networks (RNNs) to translate from and to variable-length phrases and sentences. The second article contains a quantitative and qualitative analysis of the performance of these `neural machine translation' models, laying bare the difficulties posed by long sentences and rare words. The third article deals with handling rare and out-of-vocabulary words in neural network models by using dictionary coder compression algorithms and multi-scale RNN models. The second half of this thesis does not deal with specific neural network models, but with the software tools and frameworks that can be used to define and train them. Modern deep learning frameworks need to be able to efficiently execute programs involving linear algebra and array programming, while also being able to employ automatic differentiation (AD) in order to calculate a variety of derivatives. The first article provides an overview of the difficulties posed in reconciling these two objectives, and introduces a graph-based intermediate representation that aims to tackle these difficulties. The second article considers a different approach to the same problem, implementing a tape-based source-code transformation approach to AD on a dynamically typed array programming language (Python and NumPy).
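
    To make the tape idea concrete, here is a minimal generic sketch of tape-based reverse-mode AD for scalars; it is my own toy implementation, not the thesis's Python/NumPy library. The forward pass records every primitive operation together with its local derivatives on a tape, and the backward pass replays the tape in reverse, accumulating adjoints by the chain rule.

```python
# Minimal sketch (a generic toy, not the thesis's implementation): the core
# of tape-based reverse-mode AD. The forward pass records each primitive on
# a shared tape; the backward pass replays the tape in reverse, accumulating
# adjoints via each primitive's local derivatives.

class Var:
    def __init__(self, value, tape):
        self.value, self.tape, self.grad = value, tape, 0.0

    def __mul__(self, other):
        out = Var(self.value * other.value, self.tape)
        # local derivatives: d(out)/d(self) = other.value and vice versa
        self.tape.append((out, [(self, other.value), (other, self.value)]))
        return out

    def __add__(self, other):
        out = Var(self.value + other.value, self.tape)
        self.tape.append((out, [(self, 1.0), (other, 1.0)]))
        return out

def backward(output):
    output.grad = 1.0
    for out, parents in reversed(output.tape):
        for parent, local in parents:
            parent.grad += out.grad * local   # chain-rule accumulation

tape = []
x, y = Var(3.0, tape), Var(2.0, tape)
z = x * y + x                                 # z = xy + x
backward(z)
assert (x.grad, y.grad) == (3.0, 3.0)         # dz/dx = y + 1, dz/dy = x
```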