105,755 research outputs found

    BioGUID: resolving, discovering, and minting identifiers for biodiversity informatics

    Background: Linking together the data of interest to biodiversity researchers (including specimen records, images, taxonomic names, and DNA sequences) requires services that can mint, resolve, and discover globally unique identifiers (including, but not limited to, DOIs, HTTP URIs, and LSIDs). Results: BioGUID implements a range of services, the core ones being an OpenURL resolver for bibliographic resources and an LSID resolver. The LSID resolver supports Linked Data-friendly resolution using HTTP 303 redirects and content negotiation. Additional services include journal ISSN look-up, author name matching, and a tool to monitor the status of biodiversity data providers. Conclusion: BioGUID is available at http://bioguid.info/. Source code is available from http://code.google.com/p/bioguid/.
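
    To make the resolution mechanism concrete, the following is a minimal client-side sketch assuming only the behaviour described above (an HTTP 303 redirect plus content negotiation). The identifier in the example is hypothetical, and the requests library stands in for any HTTP client.

        # Minimal sketch of Linked Data-style resolution against an HTTP
        # resolver such as bioguid.info; the identifier below is hypothetical.
        import requests

        def resolve(uri, mime="application/rdf+xml"):
            """Request a machine-readable representation via content negotiation.

            A Linked Data-friendly resolver answers the initial GET with an
            HTTP 303 See Other redirect to a document describing the resource;
            requests follows the redirect chain automatically.
            """
            resp = requests.get(uri, headers={"Accept": mime}, timeout=30)
            resp.raise_for_status()
            for hop in resp.history:  # the 303 hop(s), if any
                print(hop.status_code, "->", hop.headers.get("Location"))
            return resp

        # Hypothetical LSID-style identifier, for illustration only.
        doc = resolve("http://bioguid.info/urn:lsid:example.org:names:12345")
        print(doc.headers.get("Content-Type"))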

    Information transfer in signaling pathways: a study using coupled simulated and experimental data

    Background: The topology of signaling cascades has been studied in considerable detail; however, exactly how information is processed remains relatively unknown. Since quite diverse information has to be transported by one and the same signaling cascade (e.g., in the case of different agonists), the underlying mechanism must be more complex than a simple binary switch relying on the mere presence or absence of a particular species. Finding means to analyze the information transferred will therefore help in deciphering exactly how the cell processes information. Using the information-theoretic measure transfer entropy, we studied the properties of information transfer in an example case, namely calcium signaling under different cellular conditions. Transfer entropy is an asymmetric and dynamic measure of the dependence between two (nonlinear) stochastic processes. We used calcium signaling because it is a well-studied example of complex cellular signaling; it has been suggested that specific information is encoded in the amplitude, frequency, and waveform of the oscillatory Ca2+ signal. Results: We set up a computational framework to study information transfer, e.g., for calcium signaling at different levels of activation and different particle numbers in the system. We stochastically coupled simulated and experimentally measured calcium signals to simulated target proteins and used kernel density methods to estimate the transfer entropy from these bivariate time series. We found that, most of the time, the transfer entropy increases with increasing particle numbers. In systems with only a few particles, faithful information transfer is hampered by random fluctuations. The transfer entropy also appears to be slightly correlated with the complexity (spiking, bursting, or irregular oscillations) of the signal. Finally, we discuss a number of peculiarities of our approach in detail. Conclusion: This study presents the first application of transfer entropy to biochemical signaling pathways. We could quantify the information transferred from simulated or experimentally measured calcium signals to a target enzyme under different cellular conditions. Our approach, comprising stochastic coupling and the information-theoretic measure transfer entropy, could also be a valuable tool for the analysis of other signaling pathways.
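
    The defining quantity is easy to state and to estimate naively. The sketch below computes first-order transfer entropy with a plain histogram (plug-in) estimator rather than the kernel density methods used in the study; all names are ours, and the toy data merely illustrates the asymmetry of the measure.

        # Plug-in (histogram) estimator of first-order transfer entropy.
        import numpy as np

        def transfer_entropy(x, y, bins=8):
            """Estimate T_{X->Y} in bits from two 1-D time series.

            T_{X->Y} = sum p(y1, y0, x0) * log2[ p(y1|y0,x0) / p(y1|y0) ],
            with y1 = y_{t+1}, y0 = y_t and x0 = x_t.
            """
            x0, y0, y1 = x[:-1], y[:-1], y[1:]
            p, _ = np.histogramdd((y1, y0, x0), bins=bins)  # axes: y1, y0, x0
            p /= p.sum()
            p_y0x0 = p.sum(axis=0)       # p(y0, x0)
            p_y1y0 = p.sum(axis=2)       # p(y1, y0)
            p_y0 = p.sum(axis=(0, 2))    # p(y0)
            num = p * p_y0[None, :, None]
            den = p_y1y0[:, :, None] * p_y0x0[None, :, :]
            mask = (p > 0) & (den > 0)   # empty bins contribute zero
            return float(np.sum(p[mask] * np.log2(num[mask] / den[mask])))

        # Toy check: y is driven by x, so T_{X->Y} should exceed T_{Y->X}.
        rng = np.random.default_rng(0)
        x = rng.normal(size=5000)
        y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)
        print(transfer_entropy(x, y), transfer_entropy(y, x))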

    Impact of variance components on reliability of absolute quantification using digital PCR

    Background: Digital polymerase chain reaction (dPCR) is an increasingly popular technology for detecting and quantifying target nucleic acids. Its advertised strength is high-precision absolute quantification without the need for reference curves. The standard data-analytic approach follows a seemingly straightforward theoretical framework but ignores sources of variation in the data-generating process. These stem from both technical and biological factors, among which we distinguish features that are 1) hard-wired in the equipment, 2) user-dependent, and 3) provided by manufacturers but adaptable by the user. The impact of the corresponding variance components on the accuracy and precision of target concentration estimators presented in the literature is studied through simulation. Results: We reveal how system-specific technical factors influence both the accuracy and the precision of concentration estimates. We find that a well-chosen sample dilution level and modifiable settings such as the fluorescence cut-off for target copy detection have a substantial impact on reliability and can be adapted to the sample analysed in ways that matter. User-dependent technical variation, including pipette inaccuracy and specific sources of sample heterogeneity, leads to a steep increase in the uncertainty of estimated concentrations; users can discover this through replicate experiments and derived variance estimation. Finally, detection performance can be improved by optimizing the fluorescence intensity cut point, as suboptimal thresholds reduce the accuracy of concentration estimates considerably. Conclusions: Like any other technology, dPCR is subject to variation induced by natural perturbations, systematic settings, and user-dependent protocols. The corresponding uncertainty can be controlled with an adapted experimental design. Our findings point to modifiable key sources of uncertainty that form an important starting point for the development of guidelines on dPCR design and data analysis with correct precision bounds. Besides well-chosen sample dilution levels, experiment-specific tuning of machine settings can greatly improve results; well-chosen, data-driven fluorescence intensity thresholds in particular yield major improvements in target presence detection. We call on manufacturers to provide sufficiently detailed output data that allow users to maximize the potential of the method in their setting and obtain high precision and accuracy in their experiments.
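
    For orientation, the estimator whose variance components are being studied is the standard Poisson correction for partition occupancy. The sketch below shows only that textbook calculation; the droplet volume and dilution factor are illustrative values, and it is exactly such user-supplied inputs whose variation the paper analyses.

        # Standard Poisson model behind dPCR absolute quantification.
        import math

        def dpcr_concentration(positive, total, partition_volume_ul,
                               dilution_factor=1.0):
            """Target concentration in copies/uL of the undiluted sample.

            Partitions are Poisson-loaded, so the mean number of copies per
            partition is lam = -ln(1 - p_hat), with p_hat the positive fraction.
            """
            p_hat = positive / total
            if p_hat >= 1.0:
                raise ValueError("All partitions positive: sample too concentrated.")
            lam = -math.log(1.0 - p_hat)
            return lam * dilution_factor / partition_volume_ul

        # Example: 12,000 of 20,000 droplets positive, 0.85 nL droplets,
        # 10-fold diluted sample.
        print(dpcr_concentration(12000, 20000, partition_volume_ul=0.00085,
                                 dilution_factor=10.0))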

    3D time series analysis of cell shape using Laplacian approaches

    Background: Fundamental cellular processes such as cell movement, division, or food uptake critically depend on cells being able to change shape. Fast acquisition of three-dimensional image time series has now become possible, but we lack efficient tools for analysing shape deformations in order to understand the true three-dimensional nature of shape changes. Results: We present a framework for 3D+time cell shape analysis. The main contribution is threefold: first, we develop a fast, automatic random walker method for cell segmentation; second, we propose a novel topology-fixing method to repair segmented binary volumes that lack spherical topology; third, we show that the algorithms used for each step of the analysis pipeline (cell segmentation, topology fixing, spherical parameterization, and shape representation) are closely related to the Laplacian operator. The framework is applied to the shape analysis of neutrophil cells. Conclusions: The method we propose for cell segmentation is faster than the traditional random walker method or the level set method, and performs better on 3D time series of neutrophil cells, which are comparatively noisy because stacks have to be acquired quickly enough to account for cell motion. Our method for topology fixing outperforms the tools provided by SPHARM-MAT and SPHARM-PDM in terms of successful fixing rates. The different tasks in the presented pipeline for 3D+time shape analysis of cells can all be solved using Laplacian approaches, opening the possibility of eventually combining individual steps in order to speed up computations.
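
    The link to the Laplacian is easiest to see in the segmentation step: the classical random walker reduces to a sparse linear solve against the graph Laplacian. The sketch below implements that generic reduction (Grady's formulation) in 2D, not the authors' accelerated method; the toy image, seed placement, and beta value are illustrative.

        # Random-walker segmentation as a graph-Laplacian linear solve.
        import numpy as np
        from scipy import sparse
        from scipy.sparse.linalg import spsolve

        def random_walker_2d(img, fg_seeds, bg_seeds, beta=100.0):
            h, w = img.shape
            idx = np.arange(h * w).reshape(h, w)
            # Weighted 4-neighbour graph: w_ij = exp(-beta * (I_i - I_j)^2).
            pairs, wts = [], []
            for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
                pairs.append(np.stack([a.ravel(), b.ravel()], axis=1))
                diff = img.ravel()[a.ravel()] - img.ravel()[b.ravel()]
                wts.append(np.exp(-beta * diff ** 2))
            e, wt = np.concatenate(pairs), np.concatenate(wts)
            W = sparse.coo_matrix(
                (np.r_[wt, wt], (np.r_[e[:, 0], e[:, 1]], np.r_[e[:, 1], e[:, 0]])),
                shape=(h * w, h * w)).tocsr()
            L = sparse.diags(np.asarray(W.sum(axis=1)).ravel()) - W  # Laplacian
            seeds = np.concatenate([fg_seeds, bg_seeds])
            free = np.setdiff1d(np.arange(h * w), seeds)
            b = np.zeros(len(seeds))
            b[:len(fg_seeds)] = 1.0  # foreground seeds carry probability 1
            # Solve L_uu x = -L_us b for the free nodes' foreground probability.
            x = spsolve(L[free][:, free].tocsc(), -L[free][:, seeds] @ b)
            prob = np.zeros(h * w)
            prob[seeds], prob[free] = b, x
            return prob.reshape(h, w) > 0.5

        # Toy image: bright square on dark background, one seed per region.
        img = np.zeros((32, 32))
        img[8:24, 8:24] = 1.0
        mask = random_walker_2d(img, fg_seeds=np.array([16 * 32 + 16]),
                                bg_seeds=np.array([0]))
        print(mask.sum())  # roughly the 256 pixels of the square

    For real data, scikit-image ships a production implementation of the same algorithm as skimage.segmentation.random_walker.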

    Large-scale event extraction from literature with multi-level gene normalization

    Text mining for the life sciences aims to aid database curation, knowledge summarization, and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID, and the NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text and maps them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique genes and proteins and broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated in two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open-access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for the application of these data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/). Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from http://evexdb.org/download/ under the Creative Commons Attribution-ShareAlike (CC BY-SA) license.
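
    A brief sketch of programmatic access follows. The resource name and query parameters below are guesses for illustration only, not the real interface; the actual API is documented at http://www.evexdb.org/api/v001/.

        # Hypothetical sketch of querying the EVEX event resource over HTTP.
        # Resource path and parameter names are assumptions, not the real API.
        import requests

        BASE = "http://www.evexdb.org/api/v001/"

        def fetch(resource, **params):
            resp = requests.get(BASE + resource, params=params, timeout=30)
            resp.raise_for_status()
            return resp.json()

        # e.g. events mentioning a given Entrez Gene identifier.
        events = fetch("events", gene="7157", format="json")
        print(len(events))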

    From cheek swabs to consensus sequences: an A to Z protocol for high-throughput DNA sequencing of complete human mitochondrial genomes

    Background: Next-generation DNA sequencing (NGS) technologies have had a major impact on many fields of biological research, but especially on evolutionary biology. One area where NGS has shown potential is the high-throughput sequencing of complete mtDNA genomes (of humans and other animals). Despite the increasing use of NGS technologies and a better appreciation of their importance in answering biological questions, there remain significant obstacles to the successful implementation of NGS-based projects, especially for new users. Results: Here we present an ‘A to Z’ protocol for obtaining complete human mitochondrial (mtDNA) genomes, from DNA extraction to consensus sequence. Although designed for use on humans, this protocol could also be used to sequence small organellar genomes from other species, as well as nuclear loci. The protocol includes DNA extraction, PCR amplification, fragmentation of PCR products, barcoding of fragments, sequencing on the 454 GS FLX platform, and a complete bioinformatics pipeline (primer removal, reference-based mapping, output of coverage plots, and SNP calling). Conclusions: All steps in this protocol are designed to be straightforward to implement, especially for researchers undertaking next-generation sequencing for the first time. The molecular steps are scalable to large numbers (hundreds) of individuals, and all steps after DNA extraction can be carried out in 96-well plate format. The protocol has also been assembled so that individual ‘modules’ can be swapped out to suit available resources.
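
    As one example of the post-mapping steps, the sketch below derives a coverage plot from a sorted and indexed BAM file using pysam and matplotlib. The file name and contig label are placeholders, and the original pipeline (built around 454 data) uses its own scripts.

        # Per-position read depth across the mitochondrial reference.
        import matplotlib.pyplot as plt
        import pysam

        def mtdna_coverage(bam_path, contig="chrM"):
            with pysam.AlignmentFile(bam_path, "rb") as bam:
                # count_coverage returns four arrays: per-position counts
                # of A, C, G and T; summing them gives total depth.
                acgt = bam.count_coverage(contig)
            return [sum(base_counts) for base_counts in zip(*acgt)]

        depth = mtdna_coverage("sample01.sorted.bam")  # placeholder file name
        plt.plot(depth)
        plt.xlabel("position on mtDNA reference (bp)")
        plt.ylabel("read depth")
        plt.savefig("sample01_coverage.png")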

    Generalized gene co-expression analysis via subspace clustering using low-rank representation

    BACKGROUND: Gene Co-expression Network Analysis (GCNA) helps identify gene modules with potential biological functions and has become a popular method in bioinformatics and biomedical research. However, most current GCNA algorithms use correlation to build gene co-expression networks and identify modules of highly correlated genes. There is a need to look beyond correlation and identify gene modules using other similarity measures in order to find novel, biologically meaningful modules. RESULTS: We propose a new generalized gene co-expression analysis algorithm based on subspace clustering that can identify biologically meaningful gene co-expression modules whose genes are not all highly correlated. We use low-rank representation to construct gene co-expression networks and local maximal quasi-clique merger to identify gene co-expression modules. We applied our method to three large microarray datasets and a single-cell RNA sequencing dataset, and we demonstrate that it can identify gene modules with different biological functions than current GCNA methods and find gene modules with prognostic value. CONCLUSIONS: The presented method takes advantage of subspace clustering to generate gene co-expression networks rather than using correlation as the similarity measure between genes. Our generalized GCNA method can provide new insights from gene expression datasets and serve as a complement to current GCNA algorithms.
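
    To make the low-rank representation step concrete: in the noise-free formulation of Liu et al. (2010), the minimizer of ||Z||_* subject to X = XZ has the closed form Z = V V^T, where X = U S V^T is the skinny SVD. The sketch below builds a gene-gene affinity from that Z and clusters it with spectral clustering; it is a generic LRR illustration, not the authors' pipeline, which instead applies local maximal quasi-clique merger to the resulting network.

        # Noise-free low-rank representation (LRR) for gene module discovery.
        import numpy as np
        from sklearn.cluster import SpectralClustering

        def lrr_modules(X, rank, n_modules):
            """X is a samples-by-genes matrix; columns (genes) get clustered."""
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            V = Vt[:rank].T              # top right-singular vectors
            Z = V @ V.T                  # closed-form LRR coefficients
            A = np.abs(Z) + np.abs(Z).T  # symmetric gene-gene affinity
            return SpectralClustering(n_clusters=n_modules,
                                      affinity="precomputed",
                                      random_state=0).fit_predict(A)

        # Toy data: two gene groups drawn from different 2-D subspaces.
        rng = np.random.default_rng(1)
        g1 = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30))
        g2 = rng.normal(size=(50, 2)) @ rng.normal(size=(2, 30))
        X = np.hstack([g1, g2])
        print(lrr_modules(X, rank=4, n_modules=2))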