
    Microbiome profiling by Illumina sequencing of combinatorial sequence-tagged PCR products

    We developed a low-cost, high-throughput microbiome profiling method that uses combinatorial sequence tags attached to PCR primers that amplify the rRNA V6 region. Amplified PCR products are sequenced using an Illumina paired-end protocol to generate millions of overlapping reads. Combinatorial sequence tagging can be used to examine hundreds of samples with far fewer primers than are required when sequence tags are incorporated at only a single end. The number of reads generated permitted saturating or near-saturating analysis of samples of the vaginal microbiome. The large number of reads allowed an in-depth analysis of errors, and we found that PCR-induced errors composed the vast majority of non-organism-derived sequence variants, an observation that has significant implications for sequence clustering of similar high-throughput data. We show that the short reads are sufficient to assign organisms to the genus or species level in most cases. We suggest that this method will be useful for the deep sequencing of any short nucleotide region that is taxonomically informative; these include the V3 and V5 regions of the bacterial 16S rRNA genes and the eukaryotic V9 region that is gaining popularity for sampling protist diversity. Comment: 28 pages, 13 figures
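    The primer saving from combinatorial (dual-end) tagging can be sketched in a few lines. The tag sequences below are invented for illustration; real studies use longer, error-tolerant barcodes.

```python
from itertools import product

# Hypothetical 4 nt sequence tags (illustrative only, not from the study).
fwd_tags = ["ACGT", "TGCA", "GATC", "CTAG"]  # attached to the forward V6 primer
rev_tags = ["AACC", "GGTT", "CCAA", "TTGG"]  # attached to the reverse V6 primer

# Combinatorial tagging: each forward/reverse tag pair indexes one sample,
# so 4 + 4 = 8 tagged primers distinguish 4 x 4 = 16 samples.
samples = list(product(fwd_tags, rev_tags))
print(len(samples))  # 16

# Single-end tagging needs one uniquely tagged primer per sample:
print(len(fwd_tags) * len(rev_tags))  # 16 tagged primers for the same 16 samples
```

    With hundreds of samples the saving grows quickly: 20 forward and 20 reverse tags index 400 samples, versus 400 uniquely tagged primers for single-end tagging.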

    Source Evaluation and Trace Metal Contamination in Benthic Sediments from Equatorial Ecosystems Using Multivariate Statistical Techniques

    Trace metal (Cd, Cr, Cu, Ni and Pb) concentrations in benthic sediments were analyzed through a multi-step fractionation scheme to assess the levels and sources of contamination in estuarine, riverine and freshwater ecosystems in the Niger Delta (Nigeria). The degree of contamination was assessed using individual contamination factors (ICF) and the global contamination factor (GCF). Multivariate statistical approaches, including principal component analysis (PCA), cluster analysis and correlation tests, were employed to evaluate the interrelationships and associated sources of contamination. The spatial distribution of metal concentrations followed the pattern Pb>Cu>Cr>Cd>Ni. The ecological risk assessment by ICF showed significant potential mobility and bioavailability for Cu and Ni. The ICF contamination trend in the benthic sediments at all studied sites was Cu>Cr>Ni>Cd>Pb. The principal component and agglomerative clustering analyses indicate that trace metal contamination in the ecosystems was influenced by multiple pollution sources.
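    A common definition of these indices, which this sketch assumes, computes the ICF of a metal as the ratio of its non-residual (mobile) fractions to its residual fraction, and the GCF as the sum of the ICFs at a site. The fraction values below are invented for demonstration, not taken from the study.

```python
# Illustrative sequential-extraction results (mg/kg); values are invented.
fractions = {
    # metal: (exchangeable, reducible, oxidisable, residual)
    "Cu": (1.2, 0.8, 0.5, 0.4),
    "Pb": (0.3, 0.2, 0.1, 1.6),
}

def icf(exch, red, oxi, resid):
    """Individual contamination factor: sum of non-residual fractions
    divided by the residual fraction (assumed definition)."""
    return (exch + red + oxi) / resid

icfs = {metal: icf(*f) for metal, f in fractions.items()}
gcf = sum(icfs.values())  # global contamination factor for the site

print(icfs["Cu"])  # ~6.25: high non-residual share, high potential mobility
print(icfs["Pb"])  # ~0.375: mostly residual-bound, low bioavailability
```

    A high ICF indicates that most of the metal sits in readily mobilised fractions, which is the sense in which the abstract reports "potential mobility and bioavailability".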

    Prospects and challenges of environmental DNA (eDNA) monitoring in freshwater ponds

    Environmental DNA (eDNA) analysis is a rapid, non-invasive, cost-efficient biodiversity monitoring tool with enormous potential to inform aquatic conservation and management. Development is ongoing, with strong commercial interest, and new uses are continually being discovered. General applications of eDNA and guidelines for best practice in freshwater systems have been established, but habitat-specific assessments are lacking. Ponds are highly diverse yet understudied systems that could benefit from eDNA monitoring. However, eDNA applications in ponds and methodological constraints specific to these environments remain unaddressed. Following a stakeholder workshop in 2017, researchers combined knowledge and expertise to review these applications and the challenges that must be addressed for the future consistency of eDNA monitoring in ponds. The greatest challenges for pond eDNA surveys are representative sampling, eDNA capture, and potential PCR inhibition. We provide recommendations for sampling, eDNA capture, inhibition testing, and laboratory practice, which should aid new and ongoing eDNA projects in ponds. If implemented, these recommendations will contribute towards an eventual broad standardisation of eDNA research and practice, with room to tailor workflows for optimal analysis and different applications. Such standardisation will provide more robust, comparable, and ecologically meaningful data to enable effective conservation and management of pond biodiversity.

    Large Scale Structure of the Universe

    Galaxies are not uniformly distributed in space. On large scales the Universe displays coherent structure, with galaxies residing in groups and clusters on scales of ~1-3 Mpc/h, which lie at the intersections of long filaments of galaxies that are >10 Mpc/h in length. Vast regions of relatively empty space, known as voids, contain very few galaxies and span the volume in between these structures. This observed large scale structure depends both on cosmological parameters and on the formation and evolution of galaxies. Using the two-point correlation function, one can trace the dependence of large scale structure on galaxy properties such as luminosity, color, and stellar mass, and track its evolution with redshift. Comparison of the observed galaxy clustering signatures with dark matter simulations allows one to model and understand the clustering of galaxies and their formation and evolution within their parent dark matter halos. Clustering measurements can determine the parent dark matter halo mass of a given galaxy population, connect observed galaxy populations at different epochs, and constrain cosmological parameters and galaxy evolution models. This chapter describes the methods used to measure the two-point correlation function in both redshift and real space, presents the current results of how the clustering amplitude depends on various galaxy properties, and discusses quantitative measurements of the structures of voids and filaments. The interpretation of these results with current theoretical models is also presented. Comment: Invited contribution to be published in Vol. 8 of the book "Planets, Stars, and Stellar Systems", Springer, series editor T. D. Oswalt, volume editor W. C. Keel; v2 includes additional references, updated to match the published version.

    Serum proteome analysis for profiling protein markers associated with carcinogenesis and lymph node metastasis in nasopharyngeal carcinoma

    Nasopharyngeal carcinoma (NPC), one of the most common cancers in populations of Chinese or Asian ancestry, poses a serious health problem for southern China. Unfortunately, most NPC patients already have lymph node metastasis (LNM) when first diagnosed. We believe that 2D-based serum proteome analysis can be useful in discovering new biomarkers that may aid in the diagnosis and therapy of NPC patients. To screen for tumor-specific antigen markers of NPC, sera from 42 healthy volunteers, 27 non-LNM NPC patients and 37 LNM NPC patients were selected for a screening study using 2D combined with MS. A pretreatment strategy, including sonication and depletion of albumin and immunoglobulin G (IgG), was adopted for screening differentially expressed proteins of low abundance in serum. By 2D image analysis and MALDI-TOF-MS identification, twenty-three protein spots were found to be differentially expressed. Three of them were further validated in the sera using enzyme-linked immunosorbent assay (ELISA). Our research demonstrates that HSP70, sICAM-1 and SAA, confirmed by ELISA in sera and by immunohistochemistry, are potential NPC metastasis-specific serum biomarkers that may be of great significance in the clinical detection and management of NPC.

    An Introduction to RNA Databases

    We present an introduction to RNA databases. The history and technology behind RNA databases are briefly discussed. We examine differing methods of data collection and curation, and discuss their impact on both the scope and accuracy of the resulting databases. Finally, we demonstrate these principles through a detailed examination of four leading RNA databases: Noncode, miRBase, Rfam, and SILVA. Comment: 27 pages, 10 figures, 1 table. Submitted as a chapter for "An introduction to RNA bioinformatics", to be published in "Methods in Molecular Biology".

    Integrative modeling of transcriptional regulation in response to antirheumatic therapy

    Background: The investigation of gene regulatory networks is an important issue in molecular systems biology, and significant progress has been made by combining different types of biological data. The purpose of this study was to characterize the transcriptional program induced by etanercept therapy in patients with rheumatoid arthritis (RA). Etanercept is known to reduce disease symptoms and progression in RA, but the underlying molecular mechanisms have not been fully elucidated.
    Results: Using a DNA microarray dataset providing genome-wide expression profiles of 19 RA patients within the first week of therapy, we identified significant transcriptional changes in 83 genes. Most of these genes are known to control the human body's immune response. A novel algorithm called TILAR was then applied to construct a linear network model of the genes' regulatory interactions. The inference method derives a model from the data based on Least Angle Regression while incorporating DNA-binding site information. As a result we obtained a scale-free network that exhibits a self-regulating and highly parallel architecture, and reflects the pleiotropic immunological role of the therapeutic target TNF-alpha. Moreover, we could show that our integrative modeling strategy performs much better than algorithms using gene expression data alone.
    Conclusion: We present TILAR, a method to deduce gene regulatory interactions from gene expression data by integrating information on transcription factor binding sites. The inferred network uncovers gene regulatory effects in response to etanercept and thus provides useful hypotheses about the drug's mechanisms of action.
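    The core idea of constraining a linear network model with binding-site priors can be sketched as follows. This is not TILAR itself: the sketch substitutes plain least squares for the Least Angle Regression the method uses, and all data (`X`, `binding_sites`) are invented toy values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy expression matrix: 10 samples x 4 genes (invented z-scored values).
X = rng.standard_normal((10, 4))

# Prior knowledge (illustrative): binding_sites[i, j] = 1 if regulator j has a
# predicted binding site upstream of gene i, i.e. the edge j -> i is allowed.
binding_sites = np.array([
    [0, 1, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [1, 0, 1, 0],
])

def fit_gene(i):
    """Regress gene i only on the regulators the binding-site prior allows,
    using ordinary least squares in place of Least Angle Regression."""
    allowed = np.flatnonzero(binding_sites[i])
    coef = np.zeros(X.shape[1])
    if allowed.size:
        beta, *_ = np.linalg.lstsq(X[:, allowed], X[:, i], rcond=None)
        coef[allowed] = beta
    return coef

# Rows of W are genes, columns candidate regulators; W[i, j] != 0 only
# where the prior permits an edge j -> i.
W = np.vstack([fit_gene(i) for i in range(X.shape[1])])
```

    Restricting the regression to prior-supported edges is what lets such integrative models outperform inference from expression data alone: the prior prunes the search space of candidate regulators before any coefficients are fit.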