
    A Trio Neural Model for Dynamic Entity Relatedness Ranking

    Measuring entity relatedness is a fundamental task for many natural language processing and information retrieval applications. Prior work often studies entity relatedness in static settings and in an unsupervised manner. However, real-world entities are often involved in many different relationships, so entity relations are highly dynamic over time. In this work, we propose a neural network-based approach for dynamic entity relatedness, leveraging collective attention as supervision. Our model is capable of learning rich and distinct entity representations in a joint framework. Through extensive experiments on large-scale datasets, we demonstrate that our method achieves better results than competitive baselines. Comment: In Proceedings of CoNLL 201
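    The abstract summarizes the approach only at a high level. As a concrete illustration of supervised relatedness ranking (a minimal sketch, not the paper's actual model), the pairwise hinge loss below penalizes a model whenever a less-related candidate entity is not scored sufficiently below a more-related one; the scores and margin are hypothetical:

```python
def pairwise_hinge_loss(score_pos: float, score_neg: float, margin: float = 1.0) -> float:
    """Standard learning-to-rank hinge loss: zero when the more-related
    entity's score exceeds the less-related one's by at least `margin`."""
    return max(0.0, margin - (score_pos - score_neg))

# Hypothetical relatedness scores for two candidates w.r.t. a query entity:
loss_ok = pairwise_hinge_loss(2.0, 0.5)   # correctly separated -> 0.0
loss_bad = pairwise_hinge_loss(0.5, 0.4)  # too close -> positive loss
```

    Training would then minimize this loss over entity pairs ordered by a supervision signal such as the collective attention described in the abstract.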

    Wrongful Convictions and Their Causes: An Annotated Bibliography

    This Annotated Bibliography directs attorneys to relevant, select legal periodical articles written from 2010 to 2016 on wrongful convictions and their causes. The authors focus on five major causes that lead to wrongful convictions, as evidenced by the literature. Part I of the Annotated Bibliography focuses on resources that discuss false confessions as a cause of wrongful convictions. Part II discusses resources that address the role of police and prosecutorial practices, including misconduct, in wrongful convictions. Part III provides articles on eyewitness and jailhouse informant issues related to wrongful convictions. Part IV contains articles that deal with how forensic evidence errors may lead to wrongful convictions. Part V provides miscellaneous articles in which other relevant issues related to wrongful convictions and their causes are addressed.

    Capturing protein domain structure and function using self-supervision on domain architectures

    Predicting biological properties of unseen proteins has been shown to improve with the use of protein sequence embeddings. However, these sequence embeddings have the caveat that no biological metadata exist for individual amino acids with which to measure the quality of each learned embedding vector separately. Therefore, current sequence embeddings cannot be intrinsically evaluated, in a quantitative manner, on the degree of biological information they capture. We address this drawback with our approach, dom2vec, by learning vector representations for protein domains rather than for individual amino acids, as biological metadata do exist for each domain separately. To perform a reliable quantitative intrinsic evaluation in terms of biological knowledge, we selected the metadata related to the most distinctive biological characteristics of a domain: its structure, enzymatic function, and molecular function. Notably, dom2vec obtains an adequate level of performance in the intrinsic assessment; we can therefore draw an analogy between local linguistic features in natural languages and the domain structure and function information in domain architectures. Moreover, we demonstrate the applicability of dom2vec to protein prediction tasks by comparing it with state-of-the-art sequence embeddings on three downstream tasks. We show that dom2vec outperforms sequence embeddings for toxin and enzymatic function prediction and is comparable with sequence embeddings in cellular location prediction. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
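    The abstract describes learning embeddings for protein domains rather than amino acids, with a protein's ordered domain architecture playing the role of a sentence. A minimal sketch under that reading (skip-gram context-pair generation over domain identifiers; the Pfam-style accessions and window size are illustrative, not taken from the paper):

```python
def skipgram_pairs(architecture, window=2):
    """Generate (center, context) pairs from one protein's domain
    architecture, treating the ordered domain IDs as a 'sentence'."""
    pairs = []
    for i, center in enumerate(architecture):
        lo = max(0, i - window)
        hi = min(len(architecture), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, architecture[j]))
    return pairs

# Illustrative domain architecture (hypothetical Pfam accessions):
arch = ["PF00069", "PF07714", "PF00017"]
pairs = skipgram_pairs(arch, window=1)
# Pairs such as ("PF07714", "PF00069") would feed a word2vec-style trainer.
```

    Pairs like these can train any word2vec-style embedding model; the intrinsic evaluation described in the abstract would then compare the resulting domain vectors against the structural and functional metadata of each domain.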