39 research outputs found

    Using Womb Grammars for Inducing the Grammar of a Subset of Yorùbá Noun Phrases

    We address the problem of inducing the grammar of an under-resourced language, Yorùbá, from the grammar of English, using an efficient and linguistically savvy constraint-solving model of grammar induction: Womb Grammars (WG). Our proposed methodology adapts WG for parsing a subset of noun phrases of the target language, Yorùbá, from the grammar of the source language, English, which is described as properties between pairs of constituents. Our model is implemented in CHRG (Constraint Handling Rule Grammar) and has been used for inducing the grammar of a useful subset of Yorùbá noun phrases. Interesting extensions to the original Womb Grammar model are presented, motivated by the specific needs of Yorùbá and similar tone languages.
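
    As a toy illustration of "properties between pairs of constituents", the sketch below checks a tagged noun phrase against a handful of pairwise properties and reports which ones fail. The category names, property sets, and example data are illustrative assumptions, not the thesis's grammar; the actual system is written in CHRG/Prolog rather than Python.

```python
# A minimal sketch of a property-grammar check: a noun-phrase grammar is
# expressed as pairwise properties between constituent categories, and a
# tagged phrase is checked against them. Violated properties are reported;
# in a Womb-Grammar-style workflow, source-grammar properties repeatedly
# violated by correct target-language phrases would be weeded out or
# transformed, yielding the induced target grammar.

PRECEDENCE = {("det", "noun"), ("noun", "adj")}   # X must precede Y when both occur
REQUIREMENT = {("adj", "noun")}                   # X requires Y in the phrase
EXCLUSION = set()                                 # X and Y may not co-occur

def violated_properties(tags):
    """Return the properties a tagged constituent sequence violates."""
    pos = {}
    for i, t in enumerate(tags):
        pos.setdefault(t, i)                      # first position of each category
    violations = []
    for x, y in PRECEDENCE:
        if x in pos and y in pos and pos[x] > pos[y]:
            violations.append(("precedence", x, y))
    for x, y in REQUIREMENT:
        if x in pos and y not in pos:
            violations.append(("requirement", x, y))
    for x, y in EXCLUSION:
        if x in pos and y in pos:
            violations.append(("exclusion", x, y))
    return violations

print(violated_properties(["det", "noun", "adj"]))  # prints [] (no property violated)
print(violated_properties(["noun", "det"]))         # precedence of det before noun fails
```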

    Blockchain Software Verification and Optimization

    In the last decade, blockchain technology has undergone a strong evolution. The maturity reached and the consolidation obtained have aroused the interest of companies and businesses, transforming it into a possible response to various industrial needs. However, the lack of standards and tools for the development and maintenance of blockchain software leaves open challenges and various possibilities for improvement. The goal of this thesis is to tackle some of the challenges posed by blockchain technology and to design and implement analyses, processes, and architectures that may be applied in the real world. In particular, two topics are addressed: the verification of blockchain software and the code optimization of smart contracts. As regards verification, the thesis focuses on the original development of tools and analyses able to detect statically, i.e. without code execution, issues related to non-determinism, untrusted cross-contract invocations, and numerical overflow/underflow. Moreover, an approach based on on-chain verification is investigated, to proactively involve the blockchain in verifying the code before and after its deployment. On the optimization side, the thesis describes an optimization process for code translation from the Solidity language to Takamaka, also proposing an efficient algorithm to compute snapshots of fungible and non-fungible tokens. The results of this thesis are an important first step towards improving blockchain software development, empirically demonstrating the applicability of the proposed approaches and their relevance to the industrial field.
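
    To give a flavour of what a static, source-level check for overflow/underflow sites might look like, here is a simplified sketch; it assumes pre-0.8 Solidity arithmetic semantics and is only a pattern-based illustration, not the analyses developed in the thesis.

```python
# A minimal sketch: scan Solidity source text for compound arithmetic on
# state variables, without executing the code, and flag each occurrence as
# a potential overflow/underflow site (toy heuristic, not a real analyzer).
import re

ARITH = re.compile(r"^\s*(.+?)\s*(\+=|-=|\*=)\s*(.+);")

def flag_potential_overflows(solidity_source: str):
    findings = []
    for lineno, line in enumerate(solidity_source.splitlines(), start=1):
        m = ARITH.search(line)
        if m and "SafeMath" not in line and "unchecked" not in line:
            findings.append((lineno, m.group(1), m.group(2).rstrip("=")))
    return findings

contract = """
contract Token {
    mapping(address => uint256) balances;
    function transfer(address to, uint256 amount) public {
        balances[msg.sender] -= amount;   // may underflow if amount > balance
        balances[to] += amount;           // may overflow
    }
}
"""
for lineno, target, op in flag_potential_overflows(contract):
    print(f"line {lineno}: '{op}' on '{target}' may overflow/underflow")
```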

    Efficient Analysis and Synthesis of Complex Quantitative Systems


    A morphosyntactic processor of Modern Standard Arabic


    Kant's cognitive architecture

    Imagine a machine, equipped with sensors, receiving a stream of sensory information. It must, somehow, make sense of this stream of sensory data. But what, exactly, does this involve? We have an intuitive understanding of what is involved in “making sense” of sensory data – but can we specify precisely what is involved? Can this intuitive notion be formalized? In this thesis, we make three contributions.

    First, we provide a precise formalization of what it means to “make sense” of a sensory sequence. According to our definition, making sense means constructing a symbolic causal theory that explains the sensory sequence and satisfies a set of unity conditions inspired by Kant’s discussion in the first half of the Critique of Pure Reason. On our interpretation, making sense of sensory input is a type of program synthesis, but it is unsupervised program synthesis.

    Our second contribution is a computer implementation, the Apperception Engine, designed to satisfy our requirements for making sense of a sensory sequence. Our system is able to produce interpretable, human-readable causal theories from very small amounts of data, because of the strong inductive bias provided by the Kantian unity constraints. A causal theory produced by our system is able to predict future sensor readings, retrodict earlier readings, and impute missing sensory readings; in fact, it is able to do all three tasks simultaneously. The engine is implemented in Answer Set Programming (ASP) and induces theories expressed in an extension of Datalog that includes causal rules and constraints. We test the engine in a diverse variety of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence-induction IQ tests. In each domain, we test our engine’s ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data. The Apperception Engine performs well in all these domains, significantly outperforming neural-net baselines. These results are significant because neural nets typically struggle to solve the binding problem (where information from different modalities must somehow be combined into different aspects of one unified object) and fail to solve occlusion tasks (in which objects are sometimes visible and sometimes obscured from view). We note in particular that in the sequence-induction IQ tasks, our system achieves human-level performance. This is notable because the Apperception Engine was not designed to solve these IQ tasks; it is not a bespoke, hand-engineered solution to this particular domain. Rather, it is a general-purpose system that attempts to make sense of any sensory sequence, and it just happens to be able to solve these IQ tasks “out of the box”.

    Our third contribution is a major extension of the engine to handle noisy and ambiguous data. While the initial implementation assumes the sensory input has already been preprocessed into ground atoms of first-order logic, our extension makes sense of raw unprocessed input – a sequence of pixel images from a video camera, for example. The resulting system is a neuro-symbolic framework for distilling interpretable theories out of streams of raw, unprocessed sensory experience.
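
    To make the prediction/retrodiction/imputation idea concrete, here is a minimal sketch with a hand-written causal rule over a single toy sensor. It is only an illustration of how a causal theory explains a sequence; the Apperception Engine itself induces such theories in ASP/Datalog rather than taking them as given.

```python
# A minimal sketch: given a (toy, hand-written) causal rule, use it to
# predict the next sensor reading and to impute a missing one by checking
# which candidate value keeps the whole sequence consistent with the theory.

def step(state):
    """Toy causal rule: each sensor alternates between 'on' and 'off'."""
    return {s: ("off" if v == "on" else "on") for s, v in state.items()}

def consistent(seq):
    """A sequence is explained by the theory if every transition obeys `step`."""
    return all(step(seq[t]) == seq[t + 1] for t in range(len(seq) - 1))

observed = [{"a": "on"}, None, {"a": "on"}]          # reading at t=1 is missing

# Imputation: try each candidate value for the missing time step.
for candidate in ("on", "off"):
    trial = [observed[0], {"a": candidate}, observed[2]]
    if consistent(trial):
        print("imputed t=1:", candidate)              # -> off

# Prediction: roll the causal rule forward from the last observed state.
print("predicted t=3:", step(observed[2]))            # -> {'a': 'off'}
```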

    Unification-based constraints for statistical machine translation

    Morphology and syntax have both received attention in statistical machine translation research, but they are usually treated independently, and the historical emphasis on translation into English has meant that many morphosyntactic issues remain under-researched. Languages with richer morphologies pose additional problems, and conventional approaches tend to perform poorly when either the source or the target language has rich morphology. In both computational and theoretical linguistics, feature structures together with the associated operation of unification have proven a powerful tool for modelling many morphosyntactic aspects of natural language. In this thesis, we propose a framework that extends a state-of-the-art syntax-based model with a feature structure lexicon and unification-based constraints on the target side of the synchronous grammar. Whilst our framework is language-independent, we focus on problems in the translation of English to German, a language pair with a high degree of syntactic reordering and rich target-side morphology. We first apply our approach to modelling agreement and case government phenomena. We use the lexicon to link surface form words with grammatical feature values, such as case, gender, and number, and we use constraints to enforce feature value identity for words in agreement and government relations. We demonstrate improvements in translation quality of up to 0.5 BLEU over a strong baseline model. We then examine verbal complex production, another aspect of translation that requires the coordination of linguistic features over multiple words, often with long-range discontinuities. We develop a feature structure representation of verbal complex types, using constraint failure as an indicator of translation error, and use this to automatically identify and quantify errors that occur in our baseline system. A manual analysis and classification of errors informs an extended version of the model that incorporates information derived from a parse of the source. We identify clause spans and use model features to encourage the generation of complete verbal complex types. We are able to improve accuracy, as measured using precision and recall against values extracted from the reference test sets. Our framework allows for the incorporation of rich linguistic information, and we present sketches of further applications that could be explored in future work.
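
    The sketch below shows feature-structure unification in its simplest form (no structure sharing or variables) and how a number clash blocks agreement; the feature names and values are illustrative, not the thesis's German lexicon.

```python
# A minimal sketch of feature-structure unification: two structures either
# merge into one or fail, which is how agreement constraints such as
# case/gender/number identity can be enforced between words in a rule.

FAIL = object()

def unify(fs1, fs2):
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for feat, val in fs2.items():
            merged = unify(result[feat], val) if feat in result else val
            if merged is FAIL:
                return FAIL
            result[feat] = merged
        return result
    return fs1 if fs1 == fs2 else FAIL               # atomic values must match

# An article and a noun agree only if their agreement features unify.
article = {"agr": {"case": "acc", "gender": "fem", "number": "sg"}}
noun    = {"agr": {"gender": "fem", "number": "sg"}}
print(unify(article, noun))               # succeeds, with case inherited from the article

noun_pl = {"agr": {"gender": "fem", "number": "pl"}}
print(unify(article, noun_pl) is FAIL)    # True: the number clash blocks the derivation
```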

    Fine-Grained Linguistic Soft Constraints on Statistical Natural Language Processing Models

    This dissertation focuses on the effective combination of data-driven natural language processing (NLP) approaches with linguistic knowledge sources that are based on manual text annotation or word grouping according to semantic commonalities. I apply fine-grained linguistic soft constraints -- of a syntactic or semantic nature -- to statistical NLP models, evaluated in end-to-end, state-of-the-art statistical machine translation (SMT) systems. The introduction of semantic soft constraints involves intrinsic evaluation on word-pair similarity ranking tasks, extension from words to phrases, application in a novel distributional paraphrase generation technique, and the introduction of a generalized framework of which these soft semantic and syntactic constraints can be viewed as instances, and in which they can potentially be combined. Fine granularity is key to the successful combination of these soft constraints in many cases. I show how to softly constrain SMT models by adding fine-grained weighted features, each preferring translation of only a specific syntactic constituent. Previous attempts using coarse-grained features yielded negative results. I also show how to softly constrain corpus-based semantic models of words (“distributional profiles”) to effectively create word-sense-aware models, by using semantic word grouping information found in a manually compiled thesaurus. Previous attempts, using hard constraints and resulting in aggregated, coarse-grained models, yielded lower gains. A novel paraphrase generation technique incorporating these soft semantic constraints is then also evaluated in an SMT system. This paraphrasing technique is based on the Distributional Hypothesis. The main advantage of this novel technique over current “pivoting” techniques for paraphrasing is its independence from parallel texts, which are a limited resource. The evaluation is done by augmenting translation models with paraphrase-based translation rules, where fine-grained scoring of paraphrase-based rules yields significantly higher gains. The model augmentation includes a novel semantic reinforcement component: in many cases there are alternative paths for generating a paraphrase-based translation rule, and each of these paths reinforces a dedicated score for the “goodness” of the new translation rule. This augmented score is then used as a soft constraint, in a weighted log-linear feature, letting the translation model learn how much to “trust” the paraphrase-based translation rules. The work reported here is the first to use distributional semantic similarity measures to improve the performance of an end-to-end phrase-based SMT system. The unified framework for statistical NLP models with soft linguistic constraints enables, in principle, the combination of both semantic and syntactic constraints -- and potentially other constraints too -- in a single SMT model.
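
    As a rough illustration of the ingredients mentioned above, the sketch below computes cosine similarity between two toy distributional profiles and feeds the result into a weighted log-linear score as one soft feature among others; the profiles, feature names, and weights are invented for the example and are not the dissertation's models.

```python
# A minimal sketch: a "distributional profile" as a bag of context-word
# counts, cosine similarity between two profiles, and the use of that
# similarity as a weighted soft constraint in a log-linear model score.
import math

def cosine(p, q):
    dot = sum(p[w] * q.get(w, 0) for w in p)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

profile_buy      = {"money": 4, "shop": 3, "price": 2}
profile_purchase = {"money": 3, "shop": 2, "price": 3}

paraphrase_score = cosine(profile_buy, profile_purchase)

# Log-linear combination: the paraphrase-based score enters as a soft
# constraint whose weight is tuned, rather than acting as a hard filter.
weights  = {"translation_model": 1.0, "language_model": 0.8, "paraphrase": 0.3}
features = {"translation_model": -2.1, "language_model": -3.4,
            "paraphrase": math.log(paraphrase_score)}
score = sum(weights[f] * features[f] for f in weights)
print(round(paraphrase_score, 3), round(score, 3))
```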

    Adaptation of High Performance and High Capacity Reconfigurable Systems to OpenCL Programming Environments

    [EN] In this work, we adapt a reconfigurable computing system based on FPGA technology to OpenCL programming environments. The reconfigurable system is part of a compute prototype of the MANGO European project that includes 96 FPGAs. To optimize its use and obtain its maximum performance, it is essential to adapt it to programming environments for heterogeneous systems, such as OpenCL, which simplifies its programming. This work carries out all the activities necessary for a correct implementation of the software and hardware layers required for its use with OpenCL, as well as an evaluation of the performance obtained and of the flexibility offered by the proposed solution. The work was performed during a five-month internship at the Universitat Politècnica de València, linked to an agreement between UPV and UniNa (Università degli Studi di Napoli Federico II).
    Russo, D. (2020). Adaptation of High Performance and High Capacity Reconfigurable Systems to OpenCL Programming Environments. http://hdl.handle.net/10251/150393
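
    For orientation, the sketch below shows what host-side OpenCL programming looks like through the pyopencl bindings once such an adaptation is in place; it runs against whatever OpenCL device is locally available and makes no assumption about the MANGO FPGA devices or their vendor toolchain.

```python
# A minimal host-side OpenCL sketch using pyopencl: build a kernel, move
# data to device buffers, launch the kernel, and read the result back.
import numpy as np
import pyopencl as cl

a = np.arange(16, dtype=np.float32)
b = np.arange(16, dtype=np.float32)

ctx = cl.create_some_context()                     # pick any available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# On an FPGA-based system this kernel would be compiled to a bitstream by
# a vendor toolchain; here it is plain OpenCL C built at runtime.
program = cl.Program(ctx, """
__kernel void vadd(__global const float *a, __global const float *b, __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)
out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)
print(out)
```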

    SOCIAL INTERACTION ANALYSIS USING A MULTI-SENSOR APPROACH

    Ph.D. (Doctor of Philosophy)

    Foundations of Software Science and Computation Structures

    This open access book constitutes the proceedings of the 23rd International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 31 regular papers presented in this volume were carefully reviewed and selected from 98 submissions. The papers cover topics such as categorical models and logics; language theory, automata, and games; modal, spatial, and temporal logics; type theory and proof theory; concurrency theory and process calculi; rewriting theory; semantics of programming languages; program analysis, correctness, transformation, and verification; logics of programming; software specification and refinement; models of concurrent, reactive, stochastic, distributed, hybrid, and mobile systems; emerging models of computation; logical aspects of computational complexity; models of software security; and logical foundations of databases.