523 research outputs found

    Attentive Tensor Product Learning

    This paper proposes a new architecture, Attentive Tensor Product Learning (ATPL), to represent grammatical structures in deep learning models. ATPL exploits Tensor Product Representations (TPR), a structured neural-symbolic model developed in cognitive science, to integrate deep learning with explicit language structures and rules. The key ideas of ATPL are: 1) unsupervised learning of the role-unbinding vectors of words via a TPR-based deep neural network; 2) the use of attention modules to compute the TPR; and 3) the integration of TPR with typical deep learning architectures, including Long Short-Term Memory (LSTM) and feedforward neural networks (FFNN). The novelty of the approach lies in its ability to extract the grammatical structure of a sentence using role-unbinding vectors, which are obtained in an unsupervised manner. ATPL is applied to 1) image captioning, 2) part-of-speech (POS) tagging, and 3) constituency parsing of a sentence. Experimental results demonstrate the effectiveness of the proposed approach.
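The binding/unbinding mechanics that TPRs rest on can be sketched in a few lines of NumPy (an illustrative toy, not the paper's learned architecture): with orthonormal role vectors, each role vector serves as its own unbinding vector, so multiplying the sum of outer products by a role recovers the corresponding filler.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 3  # filler/role dimensionality and number of words (toy sizes)

# Orthonormal role vectors: rows of an orthogonal matrix, so each role
# doubles as its own unbinding (dual) vector.
roles = np.linalg.qr(rng.standard_normal((d, d)))[0][:n]
fillers = rng.standard_normal((n, d))  # one filler vector per word

# Bind: the TPR is the sum of outer products filler_i (x) role_i.
S = sum(np.outer(f, r) for f, r in zip(fillers, roles))

# Unbind: applying role_i to the TPR recovers filler_i.
recovered = S @ roles[1]
assert np.allclose(recovered, fillers[1])
```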

    The Rhetorical Algorithm: WikiLeaks and the Elliptical Secrets of Donald J. Trump

    Algorithms were a generative force behind many of the leaks and secrets that dominated the 2016 election season. In the form of the identity-anonymizing Tor software that protected the identity of leakers, mathematical protocols occupied a prominent place in the secrets generated during the presidential campaign. This essay suggests that the rhetorical trope of ellipsis, which marks a discursive omission or gap in official executive discourse, offers an equally crucial, algorithmic formula for explaining the public production of these secrets and leaks. It then describes the 2016 DNC leak and Donald Trump’s “I love Wikileaks” moment using this trope.

    Constrained Polymorphic Types for a Calculus with Name Variables

    We extend the simply-typed lambda-calculus with a mechanism for dynamic rebinding of code based on parametric nominal interfaces. That is, we introduce values which represent single fragments, or families of named fragments, of open code, where free variables are associated with names which do not obey alpha-equivalence. In this way, code fragments can be passed as function arguments and manipulated, through their nominal interface, by operators such as rebinding, overriding and renaming. Moreover, by using name variables, it is possible to write terms which are parametric in their nominal interface and/or in the way it is adapted, greatly enhancing expressivity. However, in order to prevent conflicts when instantiating name variables, the name-polymorphic types of such terms need to be equipped with simple inequality constraints. We show the soundness of the type system.
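As a loose illustration only, the flavor of manipulating open code through a nominal interface can be mimicked in untyped Python (this sketch has no type system, no constraint checking, and all names are invented): a fragment pairs a body with an ordered list of free names, which are matched literally rather than up to alpha-equivalence.

```python
# Hypothetical sketch, not the paper's calculus: an open code fragment
# pairs a body (a callable over its free variables) with a nominal
# interface listing those free names.

class Fragment:
    def __init__(self, names, body):
        self.names = list(names)  # nominal interface: ordered free names
        self.body = body          # callable over the named free variables

    def rebind(self, env):
        """Bind some names to values; the remaining names stay open."""
        remaining = [n for n in self.names if n not in env]
        def new_body(*args):
            scope = dict(env, **dict(zip(remaining, args)))
            return self.body(*(scope[n] for n in self.names))
        return Fragment(remaining, new_body)

    def rename(self, old, new):
        """Adapt the interface without touching the body."""
        return Fragment([new if n == old else n for n in self.names], self.body)

f = Fragment(["x", "y"], lambda x, y: x + 2 * y)  # open code: x + 2*y
g = f.rebind({"x": 10})                           # x bound, y still open
h = g.rename("y", "z").rebind({"z": 5})           # rename, then close z
assert g.names == ["y"] and h.names == []
assert h.body() == 20                             # 10 + 2*5
```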

    Hierarchically-coupled hidden Markov models for learning kinetic rates from single-molecule data

    We address the problem of analyzing sets of noisy time-varying signals that all report on the same process but confound straightforward analyses due to complex inter-signal heterogeneities and measurement artifacts. In particular, we consider single-molecule experiments which indirectly measure the distinct steps in a biomolecular process via observations of noisy time-dependent signals such as a fluorescence intensity or bead position. Straightforward hidden Markov model (HMM) analyses attempt to characterize such processes in terms of a set of conformational states, the transitions that can occur between these states, and the rates at which those transitions occur, but they require ad hoc post-processing steps to combine multiple signals. Here we develop a hierarchically coupled HMM that allows experimentalists to deal with inter-signal variability in a principled and automatic way. Our approach is a generalized expectation-maximization hyperparameter point-estimation procedure, with variational Bayes at the level of individual time series, that learns a single interpretable representation of the overall data-generating process.
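For orientation, the single-series building block that the method generalizes can be sketched as a scaled forward pass over a toy two-state HMM with Gaussian emissions (all parameters invented; the hierarchical coupling across signals and the variational Bayes steps are not shown):

```python
import numpy as np

# Toy two-state HMM: two conformational states with Gaussian emissions.
pi = np.array([0.6, 0.4])             # initial state distribution
A = np.array([[0.95, 0.05],           # state transition probabilities
              [0.10, 0.90]])
means, sigma = np.array([0.0, 1.0]), 0.3  # per-state emission Gaussians

def emit(x):
    """Emission density of observation x under each hidden state."""
    return np.exp(-0.5 * ((x - means) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def forward_loglik(obs):
    """Log-likelihood of one noisy signal via the scaled forward algorithm."""
    alpha = pi * emit(obs[0])
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for x in obs[1:]:
        alpha = (alpha @ A) * emit(x)   # propagate, then reweight by emission
        loglik += np.log(alpha.sum())   # accumulate scaling factors
        alpha /= alpha.sum()            # rescale to avoid underflow
    return loglik

# A signal hovering near a state mean is far more likely than one far away.
assert forward_loglik(np.zeros(50)) > forward_loglik(np.full(50, 5.0))
```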

    Inferring Chemical Reaction Patterns Using Rule Composition in Graph Grammars

    Modeling molecules as undirected graphs and chemical reactions as graph rewriting operations is a natural and convenient approach to modeling chemistry. Graph grammar rules are most naturally employed to model elementary reactions such as merging, splitting, and isomerisation of molecules. It is often convenient, in particular in the analysis of larger systems, to summarize several subsequent reactions into a single composite chemical reaction. We use a generic approach for composing graph grammar rules to define chemically useful rule compositions. We iteratively apply these rule compositions to elementary transformations in order to automatically infer complex transformation patterns. This is useful, for instance, for understanding the net effect of complex catalytic cycles such as the Formose reaction. The automatically inferred graph grammar rule is a generic representative that also covers the overall reaction pattern of the Formose cycle, namely two carbonyl groups that react with a bound glycolaldehyde to form a second glycolaldehyde. Rule composition can also be used to study polymerization reactions as well as more complicated iterative reaction schemes. Terpenes and the polyketides, for instance, form two naturally occurring classes of compounds of utmost pharmaceutical interest that can be understood as "generalized polymers" consisting of five-carbon (isoprene) and two-carbon units, respectively.
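The net-effect idea behind rule composition can be illustrated with a drastic simplification (multisets of molecule labels instead of actual graph grammar rules; all names invented): composing two rules cancels the intermediates the first produces and the second consumes, leaving the summary reaction.

```python
from collections import Counter

def compose(rule1, rule2):
    """Compose two multiset rewrite rules (lhs -> rhs) into their net effect.

    A stand-in for graph grammar rule composition: species produced by
    rule1 and consumed by rule2 cancel as intermediates.
    """
    lhs1, rhs1 = Counter(rule1[0]), Counter(rule1[1])
    lhs2, rhs2 = Counter(rule2[0]), Counter(rule2[1])
    overlap = rhs1 & lhs2                 # intermediates to cancel
    return (lhs1 + (lhs2 - overlap), rhs2 + (rhs1 - overlap))

# Toy two-step cycle: A + C -> I, then I -> B + C (C acts as a catalyst).
r1 = (["A", "C"], ["I"])
r2 = (["I"], ["B", "C"])
net = compose(r1, r2)
# The intermediate I cancels; the catalyst stays visible on both sides:
# net effect is A + C -> B + C.
assert net == (Counter(["A", "C"]), Counter(["B", "C"]))
```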

    Identification of Potential Ligands of the Main Protease of Coronavirus SARS-CoV-2 (2019-nCoV) Using Multimodal Generative Neural-Networks

    The recent outbreak of coronavirus disease 2019 (COVID-19) poses a global threat to the human population. The pandemic is caused by a novel coronavirus (2019-nCoV), also known as severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which first emerged in Wuhan city, Hubei province of China, in December 2019. Rapid human-to-human transmission has spread the contagion worldwide, affecting 244,385,444 (244.4 million) people globally and causing 4,961,489 (5 million) fatalities as of 27 October 2021. As of the same date, 6,697,607,393 (6.7 billion) vaccine doses have been administered for the prevention of COVID-19 infections. Even so, given this critical situation and the emergence of various variants, pandemic control has become challenging; this calls for enormous efforts to find new potent drug candidates and effective therapeutic approaches against the virulent respiratory disease of COVID-19. Among drug targets for antiviral treatment of COVID-19, a functionally crucial one is the main protease/3-chymotrypsin-like protease (Mpro/3CLpro) enzyme, which is primarily involved in viral maturation and replication. In view of this, in the current study I designed a library of small molecules against the main protease (Mpro) of coronavirus SARS-CoV-2 (2019-nCoV) using multimodal generative neural networks. Scaffold-based molecular docking of the series of compounds at the active site of the protein was performed; the binding poses of the molecules were evaluated, and protein-ligand interaction studies followed by binding affinity calculations validated the findings. I identified a number of promising lead compounds that could serve as potential inhibitors of the main protease (Mpro) enzyme of coronavirus SARS-CoV-2 (2019-nCoV). This study is a step forward in the development of effective antiviral therapeutic agents against COVID-19.

    A Defense of Pure Connectionism

    Connectionism is an approach to neural-networks-based cognitive modeling that encompasses the recent deep learning movement in artificial intelligence. It came of age in the 1980s, with its roots in cybernetics and earlier attempts to model the brain as a system of simple parallel processors. Connectionist models center on statistical inference within neural networks with empirically learnable parameters, which can be represented as graphical models. More recent approaches focus on learning and inference within hierarchical generative models. Contra influential and ongoing critiques, I argue in this dissertation that the connectionist approach to cognitive science possesses in principle (and, as is becoming increasingly clear, in practice) the resources to model even the most rich and distinctly human cognitive capacities, such as abstract, conceptual thought and natural language comprehension and production. Consonant with much previous philosophical work on connectionism, I argue that a core principle—that proximal representations in a vector space have similar semantic values—is the key to a successful connectionist account of the systematicity and productivity of thought, language, and other core cognitive phenomena. 
    My work here differs from preceding work in philosophy in several respects: (1) I compare a wide variety of connectionist responses to the systematicity challenge and isolate two main strands that are both historically important and reflected in ongoing work today: (a) vector symbolic architectures and (b) (compositional) vector space semantic models; (2) I consider very recent applications of these approaches, including their deployment on large-scale machine learning tasks such as machine translation; (3) I argue, again mostly on the basis of recent developments, for a continuity in representation and processing across natural language, image processing and other domains; (4) I explicitly link broad, abstract features of connectionist representation to recent proposals in cognitive science similar in spirit, such as hierarchical Bayesian and free energy minimization approaches, and offer a single rebuttal of criticisms of these related paradigms; (5) I critique recent alternative proposals that argue for a hybrid Classical (i.e. serial symbolic)/statistical model of mind; (6) I argue that defending the most plausible form of a connectionist cognitive architecture requires rethinking certain distinctions that have figured prominently in the history of the philosophy of mind and language, such as that between word- and phrase-level semantic content, and that between inference and association.
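The core principle invoked here, that nearby vectors carry similar semantic values, can be illustrated with a toy cosine-similarity check (hand-made three-dimensional vectors, purely for illustration; real semantic models learn such embeddings from data):

```python
import numpy as np

# Toy hand-made embeddings; the dimensions might loosely track features
# such as "animate", "feline", "vehicle".
emb = {
    "cat": np.array([0.9, 0.8, 0.0]),
    "dog": np.array([0.9, 0.1, 0.0]),
    "car": np.array([0.0, 0.0, 1.0]),
}

def cos(a, b):
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related words sit closer together in the vector space.
assert cos(emb["cat"], emb["dog"]) > cos(emb["cat"], emb["car"])
```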

    VERSA: Verification, Execution and Rewrite System for ACSR

    VERSA is a tool for the automated analysis of resource-bound real-time systems using the Algebra of Communicating Shared Resources (ACSR). This document serves as an introduction to the tool for beginning users, and as a reference for process and command syntax, examples of usage, and tables of operators, built-in functions and algebraic laws. Two detailed examples demonstrate the application of VERSA to canonical examples from the literature. This version of the VERSA user's guide reflects the 95.09.10 version of the tool.

    Role-based Runtime Model Synchronization

    Model-driven Software Development (MDSD) promotes the use of multiple related models to realize a software system systematically. These models usually contain redundant information but are edited independently, which easily leads to inconsistencies among them. To ensure consistency among multiple models, model synchronizations have to be employed, e.g., by means of model transformations, trace links, or triple graph grammars. Model synchronization poses three main problems for MDSD. First, classical model synchronization approaches have to be triggered manually to perform the synchronization; however, to support the consistent evolution of multiple models, it is necessary to update all of them immediately and continuously. Second, synchronization rules are specified at design time and, in classic approaches, cannot be extended at runtime, which is necessary if metamodels evolve at runtime. Finally, most classical synchronization approaches focus on bilateral model synchronization, i.e., the synchronization between two models; consequently, for more than two models, they require the definition of pairwise model synchronizations, leading to a combinatorial explosion of synchronization rules. To remedy these issues, we propose a role-based approach for runtime model synchronization. In particular, we propose role-based synchronization rules that enable the immediate and continuous propagation of changes to multiple interrelated models (and back again). Additionally, our approach permits adding new and customized synchronization rules at runtime. We illustrate the benefits of role-based runtime model synchronization using the Families to Persons case study from the Transformation Tool Contest 2017.
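The immediate, continuously triggered propagation described above can be sketched with a hypothetical observer-style coordinator (all names invented; this simplification omits the role concept itself): models notify the coordinator on every change, and synchronization rules, which may be registered at runtime, push the change to the other models at once, with idempotent writes preventing propagation loops.

```python
# Hypothetical, much-simplified sketch of continuous multi-model
# synchronization; not the paper's role-based mechanism.

class Coordinator:
    def __init__(self):
        self.rules = []  # synchronization rules, extensible at runtime

    def add_rule(self, rule):
        self.rules.append(rule)

    def changed(self, source, key, value):
        for rule in self.rules:
            rule(source, key, value)

class Model:
    def __init__(self, name, coordinator):
        self.name, self.data, self._coord = name, {}, coordinator

    def set(self, key, value):
        if self.data.get(key) != value:  # idempotent: no-op stops loops
            self.data[key] = value
            self._coord.changed(self, key, value)  # immediate propagation

coord = Coordinator()
families = Model("families", coord)
persons = Model("persons", coord)

# One rule covering all models instead of pairwise transformations.
def sync_entries(source, key, value):
    for target in (families, persons):
        if target is not source:
            target.set(key, value)

coord.add_rule(sync_entries)
families.set("member1", "Alice")           # change one model...
assert persons.data["member1"] == "Alice"  # ...the other follows instantly
```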