
    Identification of Five Putative Yeast RNA Helicase Genes

    The RNA helicase gene family encodes a group of eight homologous proteins that share regions of sequence similarity. These evolutionarily conserved proteins presumably all utilize ATP (or another nucleoside triphosphate) as an energy source for unwinding double-stranded RNA. Members of this family have been implicated in a variety of physiological functions in organisms ranging from Escherichia coli to human, such as translation initiation, mitochondrial mRNA splicing, ribosome assembly, and germ-line cell differentiation. We have applied polymerase chain reaction technology to search for additional members of the RNA helicase family in the yeast Saccharomyces cerevisiae. Using degenerate oligonucleotide primers designed to amplify DNA fragments flanked by the highly conserved motifs V L D E A D and Y I H R I G, we detected five putative RNA helicase genes. Northern and Southern blot analyses demonstrated that these genes are single copy and expressed in yeast. Several members of the RNA helicase family share sequence identity ranging from 49.2% to 67.2%, suggesting that they are functionally related. The discovery of such a multitude of putative RNA helicase genes in yeast suggests that RNA helicase activities are involved in a variety of fundamentally important biological processes.
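    As a rough illustration of the screening logic described above, the sketch below scans protein sequences for the two conserved motifs and reports the spacing between them. The sequences and the idea that motif presence alone flags a helicase candidate are hypothetical illustrations, not the paper's PCR protocol.

```python
# Conserved helicase motifs named in the abstract (amino-acid single-letter code).
MOTIF_A = "VLDEAD"   # part of the conserved DEAD-box region
MOTIF_B = "YIHRIG"   # downstream conserved motif targeted by the second primer

def find_helicase_candidates(proteins):
    """Return entries whose sequence contains both motifs, with their spacing.

    `proteins` maps a (hypothetical) identifier to an amino-acid sequence.
    """
    hits = []
    for name, seq in proteins.items():
        a = seq.find(MOTIF_A)
        b = seq.find(MOTIF_B)
        if a != -1 and b != -1 and b > a:
            # Residues between the two motifs; in the PCR screen this region
            # corresponds to the fragment amplified between the primers.
            hits.append((name, b - (a + len(MOTIF_A))))
    return hits

# Toy input: one made-up sequence carrying both motifs, one without.
example = {
    "candidate_1": "MSTK" + "VLDEAD" + "A" * 60 + "YIHRIG" + "KL",
    "unrelated_1": "MSTKAAAAKL",
}
print(find_helicase_candidates(example))  # -> [('candidate_1', 60)]
```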

    Multimodal Storytelling via Generative Adversarial Imitation Learning

    Deriving event storylines is an effective summarization method to succinctly organize extensive information, which can significantly alleviate the pain of information overload. The critical challenge is the lack of a widely recognized definition of a storyline metric. Prior studies have developed various approaches based on different assumptions about users' interests. These works can extract interesting patterns, but their assumptions do not guarantee that the derived patterns will match users' preferences. Moreover, their reliance on a single source modality misses cross-modality information. This paper proposes a method, multimodal imitation learning via generative adversarial networks (MIL-GAN), to directly model users' interests as reflected by various data. In particular, the proposed model addresses the critical challenge by imitating users' demonstrated storylines. It is designed to learn reward patterns from user-provided storylines and then apply the learned policy to unseen data. The proposed approach is demonstrated to be capable of acquiring the user's implicit intent and outperforming competing methods by a substantial margin in a user study. Comment: IJCAI 201
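    The abstract describes learning a reward by imitating user-provided storylines. The fragment below is a minimal, generic GAIL-style sketch in PyTorch, not the authors' MIL-GAN: a discriminator is trained to separate user-demonstrated storyline embeddings from generated ones, and its score is reused as a reward for the storyline-building policy. All shapes and names are assumptions.

```python
import torch
import torch.nn as nn

EMB = 64  # assumed dimensionality of an encoded storyline

# Discriminator: scores how "user-like" a storyline embedding is.
disc = nn.Sequential(nn.Linear(EMB, 128), nn.ReLU(), nn.Linear(128, 1))
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def discriminator_step(user_storylines, generated_storylines):
    """One adversarial update: user demonstrations are 'real', policy output is 'fake'."""
    logits_real = disc(user_storylines)
    logits_fake = disc(generated_storylines.detach())
    loss = bce(logits_real, torch.ones_like(logits_real)) + \
           bce(logits_fake, torch.zeros_like(logits_fake))
    d_opt.zero_grad()
    loss.backward()
    d_opt.step()
    return loss.item()

def imitation_reward(generated_storylines):
    """Reward for the storyline policy (higher = more user-like).

    -log(1 - D) is the standard GAIL surrogate reward.
    """
    with torch.no_grad():
        p_fake = torch.sigmoid(disc(generated_storylines))
    return -torch.log(1.0 - p_fake + 1e-8)

# Toy usage with random embeddings standing in for encoded storylines.
user, fake = torch.randn(32, EMB), torch.randn(32, EMB)
discriminator_step(user, fake)
print(imitation_reward(fake).shape)  # torch.Size([32, 1])
```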

    Compatibility Family Learning for Item Recommendation and Generation

    Compatibility between items, such as clothes and shoes, is a major factor in customers' purchasing decisions. However, learning "compatibility" is challenging because (1) notions of compatibility are broader than notions of similarity, (2) compatibility is asymmetric, and (3) only a small set of compatible and incompatible items is observed. We propose an end-to-end trainable system that embeds each item into a latent vector and projects a query item onto K compatible prototypes in the same space. These prototypes reflect the broad notions of compatibility. We refer to the embedding and prototypes together as the "Compatibility Family". In the learned space, we introduce a novel Projected Compatibility Distance (PCD) function which is differentiable and encourages diversity by requiring at least one prototype to be close to a compatible item, while none of the prototypes are close to an incompatible item. We evaluate our system on a toy dataset, two Amazon product datasets, and the Polyvore outfit dataset, and our method consistently achieves state-of-the-art performance. Finally, we show that we can visualize candidate compatible prototypes using a Metric-regularized Conditional Generative Adversarial Network (MrCGAN), whose input is a projected prototype and whose output is a generated image of a compatible item. We asked human evaluators to judge the relative compatibility between our generated images and images generated by CGANs conditioned directly on query items; our generated images were significantly preferred, receiving roughly twice as many votes. Comment: 9 pages, accepted to AAAI 201
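    To make the distance concrete, here is one plausible differentiable reading of the "at least one prototype is close" idea: a soft minimum over the K prototype-to-item distances, trained with a margin between compatible and incompatible items. This is a hedged sketch of the concept, not necessarily the paper's exact PCD; all dimensions are assumptions.

```python
import torch
import torch.nn as nn

D, K = 64, 4  # assumed latent size and number of compatible prototypes

class CompatibilityFamily(nn.Module):
    def __init__(self, in_dim):
        super().__init__()
        self.embed = nn.Linear(in_dim, D)          # item -> latent vector
        self.project = nn.Linear(in_dim, K * D)    # query item -> K prototypes

    def forward(self, query_x, item_x):
        protos = self.project(query_x).view(-1, K, D)   # (B, K, D)
        item = self.embed(item_x).unsqueeze(1)          # (B, 1, D)
        sq_dist = ((protos - item) ** 2).sum(dim=-1)    # (B, K)
        # Soft minimum over prototypes: small if at least one prototype is
        # close to the item, large if none are. Differentiable everywhere.
        return -torch.logsumexp(-sq_dist, dim=-1)

model = CompatibilityFamily(in_dim=128)
q, pos, neg = torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128)
# Margin-style training signal: compatible pairs should have smaller distance.
loss = torch.relu(1.0 + model(q, pos) - model(q, neg)).mean()
print(loss.item())
```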

    Patent Citation Dynamics Modeling via Multi-Attention Recurrent Networks

    Modeling and forecasting forward citations to a patent is a central task for discovering emerging technologies and for measuring the pulse of inventive progress. Conventional methods cast forward-citation forecasting as the analysis of a temporal point process, relying on a conditional intensity estimated from previously received citations. Recent approaches model the conditional intensity with recurrent neural networks to capture memory dependencies and relax the restrictions imposed by a parametric intensity function. For patent citations, we observe that forecasting a patent's chain of citations benefits not only from the patent's own history but also from the historical citations of the assignees and inventors associated with that patent. In this paper, we propose a sequence-to-sequence model that employs an attention-of-attention mechanism to capture the dependencies among these multiple time sequences. Furthermore, the proposed model forecasts both the timestamp and the category of a patent's next citation. Extensive experiments on a large patent citation dataset collected from the USPTO demonstrate that the proposed model outperforms state-of-the-art models at forward citation forecasting.
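    The following is a rough sketch of what an attention-of-attention layer over several histories could look like: one attention pass within each sequence (patent, assignee, inventor), followed by a second attention over the resulting per-sequence summaries. It is a generic illustration under assumed shapes, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

H = 32  # assumed hidden size of encoded citation events

class AttentionOfAttention(nn.Module):
    """Two-level attention: within each history, then across histories."""

    def __init__(self, hidden):
        super().__init__()
        self.inner_score = nn.Linear(hidden, 1)  # scores events inside one sequence
        self.outer_score = nn.Linear(hidden, 1)  # scores the per-sequence summaries

    def attend(self, scorer, seq):
        # seq: (B, T, H) -> weighted sum over T
        weights = F.softmax(scorer(seq), dim=1)          # (B, T, 1)
        return (weights * seq).sum(dim=1)                # (B, H)

    def forward(self, histories):
        # histories: list of (B, T_i, H) tensors, e.g. patent / assignee / inventor.
        summaries = torch.stack(
            [self.attend(self.inner_score, h) for h in histories], dim=1)
        return self.attend(self.outer_score, summaries)  # (B, H) fused context

layer = AttentionOfAttention(H)
patent, assignee, inventor = (torch.randn(4, t, H) for t in (10, 25, 7))
context = layer([patent, assignee, inventor])
print(context.shape)  # torch.Size([4, 32])
```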