
    Target Directed Event Sequence Generation for Android Applications

    Testing is a commonly used approach to ensuring software quality, and model-based testing is an active research topic for GUI programs such as Android applications (apps). Existing approaches mainly either dynamically construct a model that contains only GUI information, or build a model from the code's point of view, which may fail to capture how GUI widgets change at runtime. Moreover, most of these models do not support the back stack, a mechanism particular to Android. This paper therefore proposes LATTE, a model constructed dynamically with consideration of both the view information in the widgets and the back stack, to describe transitions between GUI widgets. We also propose a label set to link elements of the LATTE model to program snippets. The user can define a subset of the label set as a target for testing requirements that need to cover specific parts of the code. To avoid the state explosion problem during model construction, we introduce a notion of "state similarity" to balance model accuracy against analysis cost. Based on this model, a target-directed test generation method is presented that generates event sequences to cover the target effectively. Experiments on several real-world apps indicate that test cases generated with LATTE reach high coverage, and that with the model we can cover a given target with short event sequences.
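    The "state similarity" idea can be sketched as follows. This is an illustrative assumption, not the paper's exact definition: GUI states are abstracted as sets of widget identifiers, and a newly observed state is merged into an existing model state when their overlap (here, Jaccard similarity) exceeds a threshold, which keeps the model from exploding.

    ```python
    def jaccard(a: set, b: set) -> float:
        """Overlap between two widget sets."""
        if not a and not b:
            return 1.0
        return len(a & b) / len(a | b)

    def merge_state(states: list, widgets: set, threshold: float = 0.7) -> int:
        """Return the index of an existing similar state, or add a new one."""
        for i, s in enumerate(states):
            if jaccard(s, widgets) >= threshold:
                return i  # reuse the existing state instead of adding a new one
        states.append(widgets)
        return len(states) - 1

    states = []
    i1 = merge_state(states, {"btn_ok", "list_items", "menu"})
    i2 = merge_state(states, {"btn_ok", "list_items", "menu", "toast"})  # similar, merged
    i3 = merge_state(states, {"login_field", "password_field"})          # new state
    ```

    Raising the threshold makes the model more accurate but larger; lowering it trades accuracy for analysis cost, which is the balance the abstract describes.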

    Attention Focusing for Neural Machine Translation by Bridging Source and Target Embeddings

    In neural machine translation, a source sequence of words is encoded into a vector from which a target sequence is generated in the decoding phase. Unlike in statistical machine translation, the associations between source words and their possible target counterparts are not stored explicitly. Source and target words sit at the two ends of a long information processing procedure, mediated by hidden states in both the source encoding and target decoding phases. This makes it possible for a source word to be incorrectly translated into a target word that is not among its admissible counterparts in the target language. In this paper, we seek to shorten the distance between source and target words in that procedure, and thus strengthen their association, by means of a method we term bridging source and target word embeddings. We experiment with three strategies: (1) a source-side bridging model, where source word embeddings are moved one step closer to the output target sequence; (2) a target-side bridging model, which exploits the most relevant source word embeddings for predicting the target sequence; and (3) a direct bridging model, which directly connects source and target word embeddings, seeking to minimize the errors of translating one into the other. Experiments and analysis presented in this paper demonstrate that the proposed bridging models significantly improve both sentence-level translation quality in general and the alignment and translation of individual source words in particular. Comment: 9 pages, 6 figures. Accepted by ACL201
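    The direct bridging strategy can be illustrated with a minimal numpy sketch. The transform `W` and the squared-error penalty below are assumptions for illustration (the paper's exact formulation may differ): alongside the usual translation loss, one penalizes the distance between a transformed source word embedding and the embedding of the target word it should produce, so that a gradient step on `W` pulls the two representations together.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d = 4
    W = rng.normal(size=(d, d))   # learned bridging transform (illustrative)
    e_src = rng.normal(size=d)    # source word embedding
    e_tgt = rng.normal(size=d)    # embedding of its aligned target word

    def bridging_penalty(W, e_src, e_tgt):
        """Squared distance between the bridged source and target embeddings."""
        diff = W @ e_src - e_tgt
        return float(diff @ diff)

    # One gradient step on W with respect to the penalty.
    diff = W @ e_src - e_tgt
    grad_W = 2.0 * np.outer(diff, e_src)
    W_new = W - 0.01 * grad_W
    ```

    In a full model this penalty would be added to the translation loss and optimized jointly with the encoder and decoder parameters.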

    Transfer Learning for Sequence Labeling Using Source Model and Target Data

    In this paper, we propose an approach for transferring the knowledge of a neural model for sequence labeling, learned from the source domain, to a new model trained on a target domain where new label categories appear. Our transfer learning (TL) techniques make it possible to adapt the source model using the target data and new categories, without access to the source data. Our solution consists of adding new neurons to the output layer of the target model and transferring parameters from the source model, which are then fine-tuned with the target data. Additionally, we propose a neural adapter to learn the difference between the source and target label distributions, which provides additional important information to the target model. Our experiments on Named Entity Recognition show that (i) the knowledge learned in the source model can be effectively transferred when the target data contains new categories, and (ii) our neural adapter further improves such transfer. Comment: 9 pages, 4 figures, 3 tables, accepted paper in the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)
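    The output-layer expansion step can be sketched in plain numpy with illustrative names. Rows for labels known to the source model are copied over, and rows for the new target-domain labels are freshly initialized before fine-tuning; the adapter and the training loop are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    hidden_dim = 8
    src_labels = ["O", "PER", "LOC"]
    new_labels = ["ORG"]  # category that appears only in the target domain

    # Source model's output layer: one row of weights per known label.
    W_src = rng.normal(size=(len(src_labels), hidden_dim))

    def expand_output_layer(W_src, n_new, rng):
        """Stack freshly initialized rows for the new label categories."""
        W_new = rng.normal(scale=0.01, size=(n_new, W_src.shape[1]))
        return np.vstack([W_src, W_new])

    # Target model's output layer: transferred rows plus new, small-init rows.
    W_tgt = expand_output_layer(W_src, len(new_labels), rng)
    ```

    The small initialization scale for the new rows is a common heuristic so the transferred labels dominate early in fine-tuning; the paper may use a different scheme.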

    Labeling of Unique Sequences in Double-Stranded DNA at Sites of Vicinal Nicks Generated by Nicking Endonucleases

    We describe a new approach for labeling unique sequences within dsDNA under nondenaturing conditions. The method is based on the site-specific formation of vicinal nicks, which are created by nicking endonucleases (NEases) at specified DNA sites on the same strand within dsDNA. The oligomeric segment flanked by both nicks is then substituted, in a strand displacement reaction, by an oligonucleotide probe that becomes covalently attached to the target site upon subsequent ligation. Monitoring probe hybridization and ligation reactions by electrophoretic mobility retardation assay, we show that selected target sites can be quantitatively labeled with excellent sequence specificity. In these experiments, probes carrying a target-independent 3′-terminal sequence were predominantly employed. Target labeling thus yields a branched DNA structure known as 3′-flap DNA. The single-stranded terminus in 3′-flap DNA is then used to prime the replication of an externally supplied ssDNA circle in a rolling circle amplification (RCA) reaction. In model experiments with samples comprising genomic λ-DNA and human herpes virus 6 type B (HHV-6B) DNA, we have used our labeling method in combination with surface RCA as a reporter system to achieve both high sequence specificity of dsDNA targeting and high sensitivity of detection. The method can find applications in sensitive and specific detection of viral duplex DNA. Wallace A. Coulter Foundatio

    Root Mean Square Error of Neural Spike Train Sequence Matching with Optogenetics

    Optogenetics is an emerging field of neuroscience where neurons are genetically modified to express light-sensitive receptors that enable external control over when the neurons fire. Given the prominence of neuronal signaling within the brain and throughout the body, optogenetics has significant potential to improve the understanding of the nervous system and to develop treatments for neurological diseases. This paper uses a simple optogenetic model to compare the timing distortion between a randomly-generated target spike sequence and an externally-stimulated neuron spike sequence. The distortion is measured by filtering each sequence and finding the root mean square error between the two filter outputs. The expected distortion is derived in closed form when the target sequence generation rate is sufficiently low. Derivations are verified via simulations. Comment: 6 pages, 5 figures. Will be presented at IEEE Global Communications Conference (IEEE GLOBECOM 2017) in December 201
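    The distortion measure described above can be sketched as follows. The causal exponential filter shape and the unit time step are illustrative assumptions, not the paper's exact setup: each spike sequence is represented as a binary train, smoothed by convolution with the filter, and the two smoothed signals are compared by root mean square error.

    ```python
    import numpy as np

    def filtered_train(spike_bins, tau=5.0):
        """Convolve a 0/1 spike train with a causal exponential kernel."""
        t = np.arange(0, 6 * tau)            # kernel support, ~6 time constants
        kernel = np.exp(-t / tau)
        return np.convolve(spike_bins, kernel)[: len(spike_bins)]

    def spike_rmse(target_bins, stimulated_bins, tau=5.0):
        """RMSE between the filtered target and stimulated spike trains."""
        a = filtered_train(target_bins, tau)
        b = filtered_train(stimulated_bins, tau)
        return float(np.sqrt(np.mean((a - b) ** 2)))

    n = 100
    target = np.zeros(n); target[[10, 40, 70]] = 1
    shifted = np.zeros(n); shifted[[12, 41, 73]] = 1   # small timing jitter
    far = np.zeros(n); far[[25, 55, 90]] = 1           # larger timing mismatch
    ```

    Small timing jitter produces a small RMSE because the smoothed pulses still overlap, while spikes displaced by much more than the filter time constant contribute nearly their full energy to the error.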