145 research outputs found

    Direct Use of Information Extraction from Scientific Text for Modeling and Simulation in the Life Sciences

    Purpose: To demonstrate how information extracted from scientific text can be used directly in support of life science research projects. In modern digital research and academic libraries, librarians should be able to support data discovery and the organization of digital entities in order to foster research projects effectively; thus we speculate that text mining and knowledge discovery tools could be of great assistance to librarians. Such tools enable librarians to cope with the growing number and complexity of scientific publications, especially in emerging interdisciplinary fields of science. In this paper we present an example of how evidence extracted from scientific literature can be directly integrated into in silico disease models in support of drug discovery projects. Design/methodology/approach: The application of text-mining and knowledge discovery tools is explained in the form of a knowledge-based workflow for drug target candidate identification. Moreover, we propose an in silico experimentation framework for enhancing efficiency and productivity in the early steps of the drug discovery workflow. Findings: Our in silico experimentation workflow has been successfully applied to searching for hit and lead compounds in the World-wide In Silico Docking On Malaria (WISDOM) project and to finding novel inhibitor candidates. Practical implications: Direct extraction of biological information from text will ease the task of librarians in managing digital objects and supporting research projects. We expect that textual data will play an increasingly important role in evidence-based approaches taken by biomedical and translational researchers. Originality/value: Our proposed approach provides a practical example of the direct integration of text- and knowledge-based data into life science research projects, with emphasis on its application by academic and research libraries in support of scientific projects.
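The core of the workflow described above can be illustrated with a toy sketch (this is not the WISDOM pipeline itself): dictionary-based tagging of text, then ranking candidate targets by how often they co-occur with a disease term. The term lists and example sentences below are illustrative assumptions, not curated terminologies.

```python
# Toy sketch of dictionary-based target-candidate ranking from text.
# TARGET_TERMS and DISEASE_TERMS are hypothetical; a real workflow
# would draw on curated biomedical terminologies.
from collections import Counter

TARGET_TERMS = {"plasmepsin", "falcipain", "dhfr"}
DISEASE_TERMS = {"malaria", "plasmodium"}

def rank_candidates(abstracts):
    """Count sentences in which a target term co-occurs with a disease term."""
    counts = Counter()
    for abstract in abstracts:
        for sentence in abstract.lower().split("."):
            words = set(sentence.split())
            if words & DISEASE_TERMS:
                for target in words & TARGET_TERMS:
                    counts[target] += 1
    return counts.most_common()

docs = [
    "Falcipain is a cysteine protease of Plasmodium. "
    "Falcipain inhibitors block malaria parasites.",
    "DHFR is a classical malaria drug target.",
]
print(rank_candidates(docs))  # falcipain co-occurs twice, dhfr once
```

A production system would replace the naive sentence splitting and set intersection with proper named-entity recognition, but the co-occurrence ranking idea is the same.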

    Models and methods: a perspective of the impact of six IMI translational data-centric initiatives for Alzheimer’s disease and other neuropsychiatric disorders

    The Innovative Medicines Initiative (IMI) was a European public–private partnership (PPP) undertaking intended to improve the drug development process, facilitate biomarker development, accelerate clinical trial timelines, improve success rates, and generally increase the competitiveness of European pharmaceutical sector research. Through the IMI, pharmaceutical research interests and the research agenda of the EU are supported by academic partnership and financed by both pharmaceutical companies and public funds. Since its inception, the IMI has funded dozens of research partnerships focused on solving the core problems that have consistently obstructed the translation of research into clinical success. In this post-mortem review, we focus on six research initiatives that tackled foundational challenges of this nature: Aetionomy, EMIF, EPAD, EQIPD, eTRIKS, and PRISM. Several of these initiatives focused on neurodegenerative diseases; we therefore discuss the state of neurodegenerative research both at the start of the IMI and now, and the contributions that IMI partnerships made to progress in the field. Many of the initiatives we review had goals including, but not limited to, the establishment of translational, data-centric initiatives and the implementation of trans-diagnostic approaches that move beyond the candidate-disease approach to assess symptom etiology without bias, challenging the construct of disease diagnosis. For each initiative, we discuss with participating senior scientists the successes, the challenges faced, and the merits and shortcomings of the IMI approach. Here, we distill their perspectives on the lessons learned, with the aim of positively influencing funding policy and approaches in the future.

    Integration and mining of malaria molecular, functional and pharmacological data: how far are we from a chemogenomic knowledge space?

    The organization and mining of malaria genomic and post-genomic data are strongly motivated by the need to predict and characterize new biological targets and new drugs. Biological targets are sought in a biological space designed from the genomic data of Plasmodium falciparum, but also using the millions of genomic records from other species. Drug candidates are sought in a chemical space containing the millions of small molecules stored in public and private chemolibraries. Data management should therefore be as reliable and versatile as possible. In this context, we examined five aspects of the organization and mining of malaria genomic and post-genomic data: 1) the comparison of protein sequences, including compositionally atypical malaria sequences; 2) the high-throughput reconstruction of molecular phylogenies; 3) the representation of biological processes, particularly metabolic pathways; 4) versatile methods to integrate genomic data, biological representations, and functional profiling obtained from X-omic experiments after drug treatments; and 5) the determination and prediction of protein structures and their molecular docking with drug candidate structures. Progress toward a grid-enabled chemogenomic knowledge space is discussed. Comment: 43 pages, 4 figures, to appear in Malaria Journal.

    Contribution of syndecans to cellular internalization and fibrillation of amyloid-β (1–42)

    Intraneuronal accumulation of amyloid-β(1–42) (Aβ1-42) is one of the earliest signs of Alzheimer's disease (AD). Cell surface heparan sulfate proteoglycans (HSPGs) have a profound influence on the cellular uptake of Aβ1-42 by mediating its attachment and subsequent internalization into cells. Colocalization of amyloid plaques with members of the syndecan (SDC) family of HSPGs, along with increased expression of syndecan-3 and -4, has already been reported in postmortem AD brains. Considering the growing evidence of the involvement of syndecans in the pathogenesis of AD, we analyzed the contribution of syndecans to the cellular uptake and fibrillation of Aβ1-42. Among syndecans, the neuron-specific syndecan-3 isoform increased cellular uptake of Aβ1-42 the most. Kinetics of Aβ1-42 uptake also proved to differ considerably among SDC family members: syndecan-3 increased Aβ1-42 uptake from the earliest time points, while the other syndecans facilitated Aβ1-42 internalization at a slower pace. Internalized Aβ1-42 colocalized with syndecans and flotillins, highlighting the role of lipid rafts in syndecan-mediated uptake. Syndecan-3 and -4 also triggered fibrillation of Aβ1-42, further emphasizing the pathophysiological relevance of syndecans in plaque formation. Overall, our data highlight syndecans, especially the neuron-specific syndecan-3 isoform, as important players in amyloid pathology and show that syndecans, regardless of cell type, facilitate key molecular events in neurodegeneration.

    Patent Retrieval in Chemistry based on semantically tagged Named Entities

    Gurulingappa H, Müller B, Klinger R, et al. Patent Retrieval in Chemistry based on semantically tagged Named Entities. In: Voorhees EM, Buckland LP, eds. The Eighteenth Text REtrieval Conference (TREC 2009) Proceedings. Gaithersburg, Maryland, USA; 2009. This paper reports on the work conducted by Fraunhofer SCAI for the TREC Chemistry (TREC-CHEM) track 2009. The Fraunhofer SCAI team participated in two tasks, namely Technology Survey and Prior Art Search. The core of the framework is an index of 1.2 million chemical patents provided as a data set by TREC. For the technology survey, three runs were submitted based on semantic dictionaries and noun phrases. For the prior art search task, several fields were introduced into the index that contained normalized noun phrases as well as biomedical and chemical entities. Altogether, 36 runs were submitted for this task, based on automatic querying with tokens, noun phrases, and entities along with different search strategies.
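The fielded-index idea described above can be sketched in a few lines: each document is indexed under separate fields (e.g. tagged chemical entities versus plain tokens), and queries are restricted to one field. The class, field names, and patent identifiers below are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of field-based retrieval over semantically tagged entities.
# Field names ("chemical", "token") and document ids are hypothetical.
from collections import defaultdict

class FieldedIndex:
    def __init__(self):
        # field name -> term -> set of document ids (an inverted index per field)
        self.postings = defaultdict(lambda: defaultdict(set))

    def add(self, doc_id, fields):
        """Index a document under each of its fields."""
        for field, terms in fields.items():
            for term in terms:
                self.postings[field][term.lower()].add(doc_id)

    def search(self, field, term):
        """Return the sorted ids of documents matching `term` in `field`."""
        return sorted(self.postings[field].get(term.lower(), set()))

index = FieldedIndex()
index.add("US1234", {"chemical": ["imatinib"], "token": ["kinase", "inhibitor"]})
index.add("EP5678", {"chemical": ["aspirin"], "token": ["analgesic"]})
print(index.search("chemical", "Imatinib"))  # case-insensitive field search
```

Restricting a query to the "chemical" field is what lets an entity-tagged index distinguish a mention of a compound from an ordinary word, which is the point of semantic tagging in patent retrieval.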

    CTO: A Community-Based Clinical Trial Ontology and Its Applications in PubChemRDF and SCAIView

    Driven by the use cases of PubChemRDF and SCAIView, we have developed a first community-based Clinical Trial Ontology (CTO), following the OBO Foundry principles. CTO uses the Basic Formal Ontology (BFO) as its top-level ontology and reuses many terms from existing ontologies. CTO also defines many clinical trial-specific terms. The general CTO design pattern is based on the PICO framework, and we demonstrate it with two applications. First, the PubChemRDF use case demonstrates how the drug Gleevec is linked to multiple clinical trials investigating Gleevec's related chemical compounds. Second, the SCAIView text mining engine shows how the use of CTO terms in its search algorithm can identify publications referring to COVID-19-related clinical trials. Future opportunities and challenges are discussed.
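The compound-to-trial linkage described in the PubChemRDF use case amounts to traversing subject–predicate–object triples. The sketch below uses a hand-rolled in-memory triple list; the IRIs, predicate name, and trial identifiers are illustrative assumptions, not the actual CTO or PubChemRDF vocabulary.

```python
# Toy triple store illustrating how a compound links to clinical trials.
# All identifiers and the predicate name are hypothetical placeholders.
triples = [
    ("compound:imatinib", "investigated_in", "trial:NCT001"),
    ("compound:imatinib", "investigated_in", "trial:NCT002"),
    ("trial:NCT001", "has_condition", "condition:CML"),
]

def objects(subject, predicate):
    """Return all objects linked to `subject` via `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Which trials investigate the compound?
print(objects("compound:imatinib", "investigated_in"))
```

In a real RDF setting the same traversal would be a one-pattern SPARQL query against the triple store; the point is only that ontology-defined predicates make such cross-resource links queryable.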

    Towards a 21st-century roadmap for biomedical research and drug discovery: consensus report and recommendations

    Decades of costly failures in translating drug candidates from preclinical disease models to human therapeutic use warrant reconsideration of the priority placed on animal models in biomedical research. Following an international workshop attended by experts from academia, government institutions, research funding bodies, and the corporate and nongovernmental organisation (NGO) sectors, in this consensus report we analyse, as case studies, five disease areas with major unmet needs for new treatments. In view of the scientifically driven transition towards a human pathway-based paradigm in toxicology, a similar paradigm shift appears to be justified in biomedical research. There is a pressing need for an approach that strategically implements advanced, human biology-based models and tools to understand disease pathways at multiple biological scales. We present recommendations to help achieve this.

    Data sharing in neurodegenerative disease research: challenges and learnings from the innovative medicines initiative public-private partnership model

    Efficient data sharing is hampered by an array of organizational, ethical, behavioral, and technical challenges, slowing research progress and reducing the utility of data generated by clinical research studies on neurodegenerative diseases. There is a particular need to address differences between public and private sector environments for research and data sharing, which have varying standards, expectations, motivations, and interests. The Neuronet data sharing Working Group (WG) was set up to understand the existing barriers to data sharing in public-private partnership projects and to provide guidance on overcoming these barriers, by convening data sharing experts from diverse projects in the IMI neurodegeneration portfolio. In this policy and practice review, we outline the challenges and learnings of the WG, providing the neurodegeneration community with examples of good practices and recommendations on how to overcome obstacles to data sharing. These obstacles range from organizational issues linked to the unique structure of cross-sectoral, collaborative research initiatives to technical issues that affect the storage, structure, and annotation of individual datasets. We also identify sociotechnical hurdles, such as academic recognition and reward systems that disincentivise data sharing, and legal challenges linked to heightened perceptions of data privacy risk, compounded by a lack of clear guidance on GDPR compliance mechanisms for public-private research. Focusing on real-world, neuroimaging, and digital biomarker data, we highlight particular challenges and learnings for data sharing, such as data management planning, development of ethical codes of conduct, and harmonization of protocols and curation processes. Cross-cutting solutions and enablers include the principles of transparency, standardization, and co-design – from open, accessible metadata catalogs that enhance the findability of data to measures that increase visibility and trust in data reuse.