
    Chi-square-based scoring function for categorization of MEDLINE citations

    Objectives: Text categorization has been used in biomedical informatics to identify documents containing relevant topics of interest. We developed a simple method that uses a chi-square-based scoring function to determine the likelihood that a MEDLINE citation contains a genetically relevant topic. Methods: Our procedure requires construction of a genetic-domain and a nongenetic-domain document corpus. We used MeSH descriptors assigned to MEDLINE citations for this categorization task. We compared the frequencies of MeSH descriptors between the two corpora by applying the chi-square test. A MeSH descriptor was considered a positive indicator if its relative observed frequency in the genetic domain corpus was greater than its relative observed frequency in the nongenetic domain corpus. The output of the proposed method is a list of scores for all citations, with the highest scores given to citations containing MeSH descriptors typical of the genetic domain. Results: Validation was done on a set of 734 manually annotated MEDLINE citations. The method achieved a predictive accuracy of 0.87, with 0.69 recall and 0.64 precision. We evaluated the method by comparing it to three machine learning algorithms (support vector machines, decision trees, naïve Bayes). Although the differences were not statistically significant, the results showed that our chi-square scoring performs as well as the compared machine learning algorithms. Conclusions: We suggest that chi-square scoring is an effective solution to help categorize MEDLINE citations. The algorithm is implemented in the BITOLA literature-based discovery support system as a preprocessor for the gene symbol disambiguation process. Comment: 34 pages, 2 figures
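The abstract's procedure can be sketched in a few lines of Python. Everything below is an assumption-laden illustration, not the authors' implementation: the helper names, the use of scipy.stats.chi2_contingency on a 2×2 descriptor-versus-corpus table, and the sum rule for aggregating descriptor scores into a citation score are mine.

```python
from collections import Counter
from scipy.stats import chi2_contingency  # standard chi-square test of independence

def descriptor_scores(genetic_docs, nongenetic_docs):
    """Score each MeSH descriptor by chi-square; keep only 'positive
    indicators', i.e. descriptors relatively more frequent in the
    genetic-domain corpus, as the abstract defines them."""
    gen = Counter(d for doc in genetic_docs for d in doc)
    non = Counter(d for doc in nongenetic_docs for d in doc)
    n_gen, n_non = sum(gen.values()), sum(non.values())
    scores = {}
    for desc in set(gen) | set(non):
        a, b = gen[desc], non[desc]
        chi2, _, _, _ = chi2_contingency([[a, n_gen - a], [b, n_non - b]])
        if a / n_gen > b / n_non:          # positive indicator for the genetic domain
            scores[desc] = chi2
    return scores

def citation_score(mesh_descriptors, scores):
    # Aggregation rule is an assumption: citations whose descriptors are
    # typical of the genetic domain accumulate the highest total score.
    return sum(scores.get(d, 0.0) for d in mesh_descriptors)
```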

    Scientific knowledge in the age of computation: explicated, computable and manageable?

    With increasing publication and data production, scientific knowledge presents not simply an achievement but also a challenge. Scientific publications and data are increasingly treated as resources that need to be digitally ‘managed.’ This gives rise to scientific Knowledge Management (KM): second-order scientific work aiming to systematically collect, take care of and mobilise first-hand disciplinary knowledge and data in order to provide new first-order scientific knowledge. We follow the work of Leonelli (2014, 2016), Efstathiou (2012, 2016) and Hislop (2013) in our analysis of the use of KM in semantic systems biology. Through an empirical philosophical account of KM-enabled biological research, we argue that KM helps produce new first-order biological knowledge that did not exist before, and which could not have been produced by traditional means. KM work is enabled by conceiving of ‘knowledge’ as an object for computational science: as explicated in the text of biological articles and computable via appropriate data and metadata. However, these founded knowledge concepts enabling computational KM risk treating only computationally tractable data as knowledge, underestimating practice-based knowing and its significance in ensuring the validity of ‘manageable’ knowledge as knowledge.

    Predisposition to Cancer Caused by Genetic and Functional Defects of Mammalian Atad5

    ATAD5, the human ortholog of yeast Elg1, plays a role in PCNA deubiquitination. Since PCNA modification is important to regulate DNA damage bypass, ATAD5 may be important for suppression of genomic instability in mammals in vivo. To test this hypothesis, we generated heterozygous (Atad5+/m) mice that were haploinsufficient for Atad5. Atad5+/m mice displayed high levels of genomic instability in vivo, and Atad5+/m mouse embryonic fibroblasts (MEFs) exhibited molecular defects in PCNA deubiquitination in response to DNA damage, as well as DNA damage hypersensitivity and high levels of genomic instability, apoptosis, and aneuploidy. Importantly, 90% of haploinsufficient Atad5+/m mice developed tumors, including sarcomas, carcinomas, and adenocarcinomas, between 11 and 20 months of age. High levels of genomic alterations were evident in tumors that arose in the Atad5+/m mice. Consistent with a role for Atad5 in suppressing tumorigenesis, we also identified somatic mutations of ATAD5 in 4.6% of sporadic human endometrial tumors, including two nonsense mutations that resulted in loss of proper ATAD5 function. Taken together, our findings indicate that loss-of-function mutations in mammalian Atad5 are sufficient to cause genomic instability and tumorigenesis.

    The High-Throughput Analyses Era: Are We Ready for the Data Struggle?

    Recent and rapid technological advances in molecular sciences have dramatically increased the ability to carry out high-throughput studies characterized by big data production. This, in turn, has highlighted a gap between data yield and data analysis. Indeed, big data management is becoming an increasingly important aspect of many fields of molecular research, including the study of human diseases. The challenge now is to identify, within the huge amount of data obtained, that which is of clinical relevance. In this context, issues related to data interpretation, sharing and storage need to be assessed and standardized. Once this is achieved, the integration of data from different -omic approaches will improve the diagnosis, monitoring and therapy of diseases by allowing the identification of novel, potentially actionable biomarkers with a view to personalized medicine.

    Classifying distinct data types: textual streams, protein sequences and genomic variants

    Artificial Intelligence (AI) is an interdisciplinary field combining different research areas with the end goal of automating processes in everyday life and industry. The fundamental components of AI models are an “intelligent” model and a functional component defined by the end application. That is, an intelligent model can be a statistical model that recognizes patterns in data instances in order to distinguish between these instances. For example, if AI is applied in car manufacturing, then based on an image of a car part, the model can categorize whether the part belongs to the front, middle or rear compartment of the car, as a human brain would do. In the same example application, the statistical model informs a mechanical arm, the functional component, of the current car compartment, and the arm in turn assembles that compartment of the car based on predefined instructions, much as a human hand would follow the brain’s neural signals. A crucial step of AI applications is the classification of input instances by the intelligent model. The classification step in the intelligent model pipeline allows the subsequent steps to act in a similar fashion for instances belonging to the same category. We define classification as the module of the intelligent model that categorizes the input instances based on predefined human-expert or data-driven patterns of the instances. Irrespective of the method used to find patterns in data, classification is composed of four distinct steps: (i) input representation, (ii) model building, (iii) model prediction and (iv) model assessment. Based on these classification steps, we argue that applying classification to distinct data types poses different challenges. In this thesis, I focus on challenges for three distinct classification scenarios: (i) Textual Streams: how can the model building step, commonly used for static data distributions, be advanced to classify textual posts with a transient data distribution? (ii) Protein Prediction: which biologically meaningful information can be used in the input representation step to overcome the challenge of limited training data? (iii) Human Variant Pathogenicity Prediction: how can a classification system for the functional impact of human variants be developed that provides standardized and well-accepted evidence for the classification outcome, thus enabling the model assessment step? To answer these research questions, I present my contributions to classifying these different types of data. temporalMNB: I adapt the sequential prediction with expert advice paradigm (illustrated in the sketch below) to optimally aggregate complementary distributions, enhancing a Naive Bayes model so that it adapts to the drifting distribution of the characteristics of textual posts. dom2vec: our proposal to learn embedding vectors for protein domains using self-supervision. Based on the high performance achieved by the dom2vec embeddings in quantitative intrinsic assessment of the captured biological information, I provide example evidence for an analogy between local linguistic features in natural languages and the domain structure and function information in domain architectures. Last, I describe the GenOtoScope bioinformatics software tool, which automates standardized evidence-based criteria for assessing the pathogenicity impact of variants associated with hearing loss. Finally, to increase the practical use of our last contribution, I develop easy-to-use software interfaces to be used, in research settings, by clinical diagnostics personnel.
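temporalMNB itself is not reproduced here, but the "sequential prediction with expert advice" paradigm it adapts can be illustrated with a toy multiplicative-weights aggregator in Python. All names and the exact update rule below are illustrative assumptions; the thesis aggregates complementary feature distributions inside a single Naive Bayes model, not whole classifiers as sketched.

```python
import math

class WeightedMajority:
    """Toy prediction-with-expert-advice aggregator: experts that err are
    exponentially down-weighted, so the ensemble tracks a drifting stream."""

    def __init__(self, experts, eta=0.5):
        self.experts = experts                  # callables: instance -> label
        self.weights = [1.0] * len(experts)     # one weight per expert
        self.eta = eta                          # penalty rate for mistakes

    def predict(self, x):
        votes = {}
        for w, expert in zip(self.weights, self.experts):
            label = expert(x)
            votes[label] = votes.get(label, 0.0) + w
        return max(votes, key=votes.get)        # weighted majority vote

    def update(self, x, true_label):
        # Multiplicative penalty for wrong experts, then renormalize.
        for i, expert in enumerate(self.experts):
            if expert(x) != true_label:
                self.weights[i] *= math.exp(-self.eta)
        total = sum(self.weights)
        self.weights = [w / total for w in self.weights]
```

On a labelled stream one would call predict on each arriving post and then update with the revealed label; experts fitted on different time windows or feature views give the ensemble its ability to follow drift.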

    Fifteen new risk loci for coronary artery disease highlight arterial-wall-specific mechanisms

    Coronary artery disease (CAD) is a leading cause of morbidity and mortality worldwide. Although 58 genomic regions have been associated with CAD thus far, most of the heritability is unexplained, indicating that additional susceptibility loci await identification. An efficient discovery strategy may be larger-scale evaluation of promising associations suggested by genome-wide association studies (GWAS). Hence, we genotyped 56,309 participants using a targeted gene array derived from earlier GWAS results and performed meta-analysis of results with 194,427 participants previously genotyped, totaling 88,192 CAD cases and 162,544 controls. We identified 25 new SNP-CAD associations (P < 5 × 10⁻⁸ in fixed-effects meta-analysis) from 15 genomic regions, including SNPs in or near genes involved in cellular adhesion, leukocyte migration and atherosclerosis (PECAM1, rs1867624), coagulation and inflammation (PROCR, rs867186 (p.Ser219Gly)) and vascular smooth muscle cell differentiation (LMOD1, rs2820315). Correlation of these regions with cell-type-specific gene expression and plasma protein levels sheds light on potential disease mechanisms.
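The significance criterion "P < 5 × 10⁻⁸ in fixed-effects meta-analysis" refers to standard inverse-variance pooling of per-study SNP effect estimates. A minimal Python sketch of the textbook formula follows; the function name and the two-study numbers are invented for illustration and this is not the consortium's actual pipeline.

```python
import math

def fixed_effects_meta(betas, ses):
    """Inverse-variance fixed-effects meta-analysis: pool per-study effect
    estimates (e.g. log odds ratios) weighted by 1/SE^2."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se, pooled / pooled_se   # effect, SE, z statistic

# Hypothetical two-study example:
beta, se, z = fixed_effects_meta([0.08, 0.11], [0.02, 0.03])
p = math.erfc(abs(z) / math.sqrt(2))   # two-sided P from the normal tail
# An association is declared genome-wide significant when p < 5e-8.
```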