
    BioGUID: resolving, discovering, and minting identifiers for biodiversity informatics

    Background: Linking together the data of interest to biodiversity researchers (including specimen records, images, taxonomic names, and DNA sequences) requires services that can mint, resolve, and discover globally unique identifiers (including, but not limited to, DOIs, HTTP URIs, and LSIDs). Results: BioGUID implements a range of services, the core ones being an OpenURL resolver for bibliographic resources, and a LSID resolver. The LSID resolver supports Linked Data-friendly resolution using HTTP 303 redirects and content negotiation. Additional services include journal ISSN look-up, author name matching, and a tool to monitor the status of biodiversity data providers. Conclusion: BioGUID is available at http://bioguid.info/. Source code is available from http://code.google.com/p/bioguid/
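    A minimal sketch of the Linked Data-friendly resolution pattern described above: an HTTP 303 redirect whose target is chosen by content negotiation. The helper name and the `.rdf`/`.html` representation paths below are hypothetical illustrations, not BioGUID's actual routes.

```python
# Sketch of LSID resolution via HTTP 303 + content negotiation.
# The ".rdf"/".html" representation paths are hypothetical.

def resolve_lsid(lsid: str, accept: str) -> tuple[int, str]:
    """Pick a representation from the Accept header and return the
    HTTP status and redirect target for a 303 See Other response."""
    base = "http://bioguid.info/" + lsid  # hypothetical resolver path
    if "application/rdf+xml" in accept:
        return 303, base + ".rdf"   # machine-readable Linked Data view
    return 303, base + ".html"      # human-readable view

status, location = resolve_lsid(
    "urn:lsid:ubio.org:namebank:11815", "application/rdf+xml")
print(status, location)
```

    Answering with 303 rather than 200 signals that the identifier names a non-information resource (a taxon, a specimen) and that the redirect target is merely one description of it.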

    3D time series analysis of cell shape using Laplacian approaches

    Background: Fundamental cellular processes such as cell movement, division or food uptake critically depend on cells being able to change shape. Fast acquisition of three-dimensional image time series has now become possible, but we lack efficient tools for analysing shape deformations in order to understand the real three-dimensional nature of shape changes. Results: We present a framework for 3D+time cell shape analysis. The main contribution is three-fold: First, we develop a fast, automatic random walker method for cell segmentation. Second, a novel topology fixing method is proposed to fix segmented binary volumes without spherical topology. Third, we show that algorithms used for each individual step of the analysis pipeline (cell segmentation, topology fixing, spherical parameterization, and shape representation) are closely related to the Laplacian operator. The framework is applied to the shape analysis of neutrophil cells. Conclusions: The method we propose for cell segmentation is faster than the traditional random walker method or the level set method, and performs better on 3D time-series of neutrophil cells, which are comparatively noisy as stacks have to be acquired fast enough to account for cell motion. Our method for topology fixing outperforms the tools provided by SPHARM-MAT and SPHARM-PDM in terms of their successful fixing rates. The different tasks in the presented pipeline for 3D+time shape analysis of cells can be solved using Laplacian approaches, opening the possibility of eventually combining individual steps in order to speed up computations
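    The link the abstract draws between random walker segmentation and the Laplacian operator can be made concrete on a toy 1D example: the walker probabilities solve a Dirichlet problem on the graph Laplacian. This construction is mine for illustration; the paper's fast 3D implementation differs.

```python
import numpy as np

def random_walker_1d(intensity, fg_seed, bg_seed, beta=10.0):
    """Probability that a random walker started at each pixel reaches
    the foreground seed before the background seed, obtained by solving
    a Dirichlet problem on the combinatorial graph Laplacian."""
    n = len(intensity)
    W = np.zeros((n, n))
    for i in range(n - 1):  # chain graph with Gaussian edge weights
        w = np.exp(-beta * (intensity[i] - intensity[i + 1]) ** 2)
        W[i, i + 1] = W[i + 1, i] = w
    L = np.diag(W.sum(axis=1)) - W          # combinatorial Laplacian
    seeded = [fg_seed, bg_seed]
    free = [i for i in range(n) if i not in seeded]
    b = np.zeros(n)
    b[fg_seed] = 1.0                        # Dirichlet boundary values
    x = np.zeros(n)
    x[seeded] = b[seeded]
    # Solve L_uu x_u = -L_us x_s for the unseeded nodes
    x[free] = np.linalg.solve(L[np.ix_(free, free)],
                              -L[np.ix_(free, seeded)] @ b[seeded])
    return x

probs = random_walker_1d(np.array([0.1, 0.12, 0.11, 0.9, 0.95]), 0, 4)
print((probs > 0.5).astype(int))  # per-pixel foreground labels
```

    The weak edge across the intensity jump isolates the two regions, so pixels on each side inherit the label of their seed; the same linear-algebraic structure carries over to 3D voxel grids.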

    Information transfer in signaling pathways : a study using coupled simulated and experimental data

    Background: The topology of signaling cascades has been studied in quite some detail. However, how information is processed exactly is still relatively unknown. Since quite diverse information has to be transported by one and the same signaling cascade (e.g. in case of different agonists), it is clear that the underlying mechanism is more complex than a simple binary switch which relies on the mere presence or absence of a particular species. Therefore, finding means to analyze the information transferred will help in deciphering how information is processed exactly in the cell. Using the information-theoretic measure transfer entropy, we studied the properties of information transfer in an example case, namely calcium signaling under different cellular conditions. Transfer entropy is an asymmetric and dynamic measure of the dependence of two (nonlinear) stochastic processes. We used calcium signaling since it is a well-studied example of complex cellular signaling. It has been suggested that specific information is encoded in the amplitude, frequency and waveform of the oscillatory Ca2+-signal. Results: We set up a computational framework to study information transfer, e.g. for calcium signaling at different levels of activation and different particle numbers in the system. We stochastically coupled simulated and experimentally measured calcium signals to simulated target proteins and used kernel density methods to estimate the transfer entropy from these bivariate time series. We found that, most of the time, the transfer entropy increases with increasing particle numbers. In systems with only few particles, faithful information transfer is hampered by random fluctuations. The transfer entropy also seems to be slightly correlated to the complexity (spiking, bursting or irregular oscillations) of the signal. Finally, we discuss a number of peculiarities of our approach in detail. 
Conclusion: This study presents the first application of transfer entropy to biochemical signaling pathways. We could quantify the information transferred from simulated/experimentally measured calcium signals to a target enzyme under different cellular conditions. Our approach, comprising stochastic coupling and using the information-theoretic measure transfer entropy, could also be a valuable tool for the analysis of other signaling pathways
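    For readers unfamiliar with the measure, a simple plug-in estimator makes the definition concrete: TE(X→Y) compares the predictability of Y's next value with and without knowledge of X's past. The study itself uses kernel density methods; the binned version below is only an illustrative stand-in.

```python
import numpy as np

def transfer_entropy(x, y, bins=2):
    """Binned plug-in estimate of TE(X -> Y) in nats from two 1D series."""
    # discretize both series into equal-width bins
    xd = np.digitize(x, np.linspace(x.min(), x.max(), bins + 1)[1:-1])
    yd = np.digitize(y, np.linspace(y.min(), y.max(), bins + 1)[1:-1])
    y_next, y_now, x_now = yd[1:], yd[:-1], xd[:-1]
    n = len(y_next)
    te = 0.0
    for a in range(bins):           # sum over joint histogram cells
        for b in range(bins):
            for c in range(bins):
                p_abc = ((y_next == a) & (y_now == b) & (x_now == c)).sum() / n
                if p_abc == 0:
                    continue
                p_bc = ((y_now == b) & (x_now == c)).sum() / n
                p_ab = ((y_next == a) & (y_now == b)).sum() / n
                p_b = (y_now == b).sum() / n
                # p(y+|y,x) vs p(y+|y): gain from knowing x's past
                te += p_abc * np.log((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = np.empty_like(x)
y[0] = 0.0
y[1:] = 0.9 * x[:-1] + 0.1 * rng.normal(size=4999)  # y driven by past x
te_xy = transfer_entropy(x, y)
te_yx = transfer_entropy(y, x)
print(te_xy, te_yx)
```

    The asymmetry of the measure is visible directly: TE(X→Y) is large for the driven direction, while TE(Y→X) stays near zero because X is independent noise.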

    Large-scale event extraction from literature with multi-level gene normalization

    Text mining for the life sciences aims to aid database curation, knowledge summarization and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID and NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text, mapping them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique genes and proteins, and on to broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated in two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for the application of this data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/). 
Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from http://evexdb.org/download/, under the Creative Commons Attribution-ShareAlike (CC BY-SA) license

    Impact of variance components on reliability of absolute quantification using digital PCR

    Background: Digital polymerase chain reaction (dPCR) is an increasingly popular technology for detecting and quantifying target nucleic acids. Its advertised strength is high precision absolute quantification without needing reference curves. The standard data analytic approach follows a seemingly straightforward theoretical framework but ignores sources of variation in the data generating process. These stem from both technical and biological factors, where we distinguish features that are 1) hard-wired in the equipment, 2) user-dependent and 3) provided by manufacturers but may be adapted by the user. The impact of the corresponding variance components on the accuracy and precision of target concentration estimators presented in the literature is studied through simulation. Results: We reveal how system-specific technical factors influence accuracy as well as precision of concentration estimates. We find that a well-chosen sample dilution level and modifiable settings such as the fluorescence cut-off for target copy detection have a substantial impact on reliability and can be adapted to the sample analysed in ways that matter. User-dependent technical variation, including pipette inaccuracy and specific sources of sample heterogeneity, leads to a steep increase in uncertainty of estimated concentrations. Users can discover this through replicate experiments and derived variance estimation. Finally, the detection performance can be improved by optimizing the fluorescence intensity cut point as suboptimal thresholds reduce the accuracy of concentration estimates considerably. Conclusions: Like any other technology, dPCR is subject to variation induced by natural perturbations, systematic settings as well as user-dependent protocols. Corresponding uncertainty may be controlled with an adapted experimental design. 
Our findings point to modifiable key sources of uncertainty that form an important starting point for the development of guidelines on dPCR design and data analysis with correct precision bounds. Besides clever choices of sample dilution levels, experiment-specific tuning of machine settings can greatly improve results. Well-chosen data-driven fluorescence intensity thresholds in particular result in major improvements in target presence detection. We call on manufacturers to provide sufficiently detailed output data that allows users to maximize the potential of the method in their setting and obtain high precision and accuracy for their experiments
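    The "seemingly straightforward theoretical framework" the abstract refers to is the Poisson partition model underlying dPCR: with n partitions of volume v and k positive partitions, the mean copies per partition is λ = -ln(1 - k/n) and the concentration is λ/v. The sketch below adds a delta-method 95% confidence interval; the function name and example numbers are illustrative, not from the paper.

```python
import math

def dpcr_concentration(n_total, n_positive, partition_volume_ul):
    """Poisson estimate of target concentration (copies per microlitre)
    from digital PCR partition counts, with a delta-method 95% CI."""
    p = n_positive / n_total
    lam = -math.log(1.0 - p)          # mean copies per partition
    # delta-method standard error: dlam/dp = 1/(1-p), Var(p) = p(1-p)/n
    se = math.sqrt(p / ((1.0 - p) * n_total))
    conc = lam / partition_volume_ul
    ci = ((lam - 1.96 * se) / partition_volume_ul,
          (lam + 1.96 * se) / partition_volume_ul)
    return conc, ci

# 20,000 partitions of 0.85 nL, 5,000 positive (illustrative values)
conc, (lo, hi) = dpcr_concentration(20000, 5000, 0.00085)
print(round(conc), round(lo), round(hi))
```

    Note that this interval reflects only Poisson sampling error; the abstract's point is precisely that pipetting inaccuracy, sample heterogeneity and threshold choice add variance components on top of it, which this textbook formula ignores.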

    An optimized TOPS+ comparison method for enhanced TOPS models

    This article has been made available through the Brunel Open Access Publishing Fund. Background: Although methods based on highly abstract descriptions of protein structures, such as VAST and TOPS, can perform very fast protein structure comparison, the results can lack a high degree of biological significance. Previously we have discussed the basic mechanisms of our novel method for structure comparison based on our TOPS+ model (Topological descriptions of Protein Structures Enhanced with Ligand Information). In this paper we show how these results can be significantly improved using parameter optimization, and we call the resulting optimised method the advanced TOPS+ comparison method (advTOPS+). Results: We have developed a TOPS+ string model as an improvement to the TOPS [1-3] graph model by considering loops as secondary structure elements (SSEs) in addition to helices and strands, representing ligands as first-class objects, and describing interactions between SSEs, and between SSEs and ligands, by incoming and outgoing arcs, annotating SSEs with the interaction direction and type. Benchmarking results of an all-against-all pairwise comparison using a large dataset of 2,620 non-redundant structures from the PDB40 dataset [4] demonstrate the biological significance, in terms of SCOP classification at the superfamily level, of our TOPS+ comparison method. Conclusions: Our advanced TOPS+ comparison shows better performance on the PDB40 dataset [4] than our basic TOPS+ method, giving 90 percent accuracy for SCOP alpha+beta; a 6 percent increase in accuracy compared to the TOPS and basic TOPS+ methods. It also outperforms the TOPS, basic TOPS+ and SSAP comparison methods on the Chew-Kedem dataset [5], achieving 98 percent accuracy. Software availability: The TOPS+ comparison server is available at http://balabio.dcs.gla.ac.uk/mallika/WebTOPS/.