
    Therapeutic Discovery for Friedreich Ataxia Using Random shRNA Selection

    We screened a 300,000-clone, random shRNA-expressing library and identified shRNA sequences that reverse the decreased growth/survival phenotype of primary Friedreich ataxia (FA) fibroblasts grown in mitochondrial stress media. One of the hit sequences, gFA2, increases frataxin expression ~2-fold, either as a vector-expressed shRNA or as a transfected siRNA. We randomly mutagenized gFA2 to create a gFA2 variant sub-library. We screened this sub-library in primary FA fibroblasts and identified two gFA2 variants, gFA2.8 and gFA2.10, that further increase frataxin expression. Microarray analyses of primary FA fibroblasts expressing another hit shRNA, gFA11, revealed alterations in ~350 mRNAs. Bioinformatic pathway analyses indicated significant changes in mRNAs involved in cytokine secretion; we confirmed biochemically that gFA11 induces significant changes in cytokine secretion. Ingenuity Pathway Analysis revealed that inhibition of a known transcription factor, or treatment of cells with a previously studied chemical compound, induced a pattern of gene expression statistically similar to that induced by gFA11. Inhibition of the transcription factor using a directed siRNA in primary FA fibroblasts, as well as treatment of the cells with the chemical compound, recapitulated the phenotype induced by gFA11, namely reversal of decreased growth/survival in mitochondrial stress media. We are currently planning similar microarray and bioinformatics analyses of the optimized versions of gFA2. Combined with microarray analyses and bioinformatic pattern-matching, our random shRNA library screens potentially yield: 1) small-RNA therapeutic candidates, 2) conventional chemical-compound therapeutic candidates, 3) drug-target candidates, and 4) elucidation of disease mechanisms, which may inform additional therapeutic initiatives.
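    The pattern-matching step above (finding a transcription-factor knockdown or chemical compound whose expression signature resembles the gFA11 signature) can be illustrated with a simple rank-correlation comparison. The sketch below is not the authors' Ingenuity-based pipeline; the gene names and fold-change values are hypothetical placeholders used only to show the idea.

```python
"""Illustrative sketch (not the authors' pipeline): compare the expression
signature induced by an shRNA (e.g. gFA11) against reference perturbation
signatures to find perturbations with similar effects. All gene symbols and
log2 fold-change values below are hypothetical placeholders."""

import numpy as np
from scipy.stats import spearmanr

# Hypothetical log2 fold-changes (shRNA-treated vs. control fibroblasts).
shrna_signature = {"GENE_A": 1.8, "GENE_B": -2.1, "GENE_C": 0.9, "GENE_D": -1.4}

# Hypothetical reference signatures (transcription-factor knockdown,
# small-molecule treatment, unrelated perturbation).
reference_signatures = {
    "TF_knockdown": {"GENE_A": 1.5, "GENE_B": -1.9, "GENE_C": 1.1, "GENE_D": -1.0},
    "compound_X":   {"GENE_A": 1.2, "GENE_B": -1.6, "GENE_C": 0.7, "GENE_D": -1.2},
    "unrelated":    {"GENE_A": -0.3, "GENE_B": 0.4, "GENE_C": -0.8, "GENE_D": 0.2},
}

def signature_similarity(query, reference):
    """Spearman correlation over the genes shared by two signatures."""
    shared = sorted(set(query) & set(reference))
    q = np.array([query[g] for g in shared])
    r = np.array([reference[g] for g in shared])
    return spearmanr(q, r)

for name, ref in reference_signatures.items():
    rho, p = signature_similarity(shrna_signature, ref)
    print(f"{name:14s} rho={rho:+.2f} p={p:.3f}")
```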

    Functional genomics of the beta-cell: short-chain 3-hydroxyacyl-coenzyme A dehydrogenase regulates insulin secretion independent of K+ currents

    Recent advances in functional genomics afford the opportunity to interrogate the expression profiles of thousands of genes simultaneously and to examine the function of these genes in a high-throughput manner. In this study, we describe a rational and efficient approach to identifying novel regulators of insulin secretion by the pancreatic beta-cell. Computational analysis of expression profiles of several mouse and cellular models of impaired insulin secretion identified 373 candidate genes involved in the regulation of insulin secretion. Using RNA interference, we assessed the requirement for 10 of these candidates and identified four genes (40%) as being essential for normal insulin secretion. Among the genes identified was Hadhsc, which encodes short-chain 3-hydroxyacyl-coenzyme A dehydrogenase (SCHAD), an enzyme of mitochondrial beta-oxidation of fatty acids whose mutation results in congenital hyperinsulinism. RNA interference-mediated suppression of Hadhsc in insulinoma cells and primary rodent islets revealed enhanced basal but normal glucose-stimulated insulin secretion. This increase in basal insulin secretion was not attenuated by opening of the KATP channel with diazoxide, suggesting that SCHAD regulates insulin secretion through a KATP channel-independent mechanism. Our results suggest a molecular explanation for the hyperinsulinemic hypoglycemia seen in patients with SCHAD deficiency.
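    As a rough illustration of the candidate-nomination step described above, one could intersect differentially expressed gene lists from several models of impaired secretion and carry forward genes that recur. The model names, gene lists, and the two-model threshold below are illustrative assumptions, not the study's actual criteria.

```python
"""Minimal sketch of one way to nominate candidate secretion regulators from
multiple expression datasets; gene lists and thresholds are illustrative."""

from collections import Counter

# Hypothetical differentially expressed genes per model of impaired secretion.
model_hits = {
    "mouse_model_1": {"Hadhsc", "GeneX", "GeneY"},
    "mouse_model_2": {"Hadhsc", "GeneY", "GeneZ"},
    "cell_model_1":  {"Hadhsc", "GeneX", "GeneW"},
}

counts = Counter(g for hits in model_hits.values() for g in hits)

# Keep genes altered in at least two independent models as candidates for
# follow-up RNA interference knockdown experiments.
candidates = sorted(g for g, n in counts.items() if n >= 2)
print(candidates)  # e.g. ['GeneX', 'GeneY', 'Hadhsc']
```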

    Navigating Public Microarray Databases

    With the ever-escalating amount of data being produced by genome-wide microarray studies, it is of increasing importance that these data are captured in public databases so that researchers can use this information to complement and enhance their own studies. Many groups have set up databases of expression data, ranging from large repositories, which are designed to comprehensively capture all published data, through to more specialized databases. The public repositories, such as ArrayExpress at the European Bioinformatics Institute, contain complete datasets in raw format in addition to processed data, whilst the specialist databases tend to provide downstream analysis of normalized data from more focused studies and data sources. Here we provide a guide to the use of these public microarray resources.
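    Many of these repositories can also be queried programmatically. As one hedged example, the sketch below searches NCBI GEO DataSets (another large public microarray repository) through the E-utilities REST API; the search term is an arbitrary placeholder, and ArrayExpress offers its own, different query interface that is not shown here.

```python
"""Illustrative sketch of programmatic access to a public microarray
repository via the NCBI E-utilities 'esearch' endpoint. The query string is
a placeholder chosen only for demonstration."""

import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "gds",  # GEO DataSets
    "term": "pancreatic beta cell AND expression profiling by array",
    "retmode": "json",
    "retmax": 5,
}

resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()
ids = resp.json()["esearchresult"]["idlist"]
print("Matching GEO DataSets record IDs:", ids)
```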

    CLO: The cell line ontology

    Background: Cell lines have been widely used in biomedical research. The community-based Cell Line Ontology (CLO) is a member of the OBO Foundry library that covers the domain of cell lines. Since its publication two years ago, significant updates have been made, including new groups joining the CLO consortium, new cell line cells, upper-level alignment with the Cell Ontology (CL) and the Ontology for Biomedical Investigations (OBI), and logical extensions.
    Construction and content: Collaboration among the CLO, CL, and OBI has established consensus definitions of cell line-specific terms such as 'cell line', 'cell line cell', 'cell line culturing', and 'mortal' vs. 'immortal cell line cell'. A cell line is a genetically stable cultured cell population that contains individual cell line cells. The hierarchical structure of the CLO is built on the hierarchy of the in vivo cell types defined in CL and the tissue types (from which cell line cells are derived) defined in the UBERON cross-species anatomy ontology. The new hierarchical structure makes it easier to browse, query, and perform automated classification. We have recently added classes representing more than 2,000 cell line cells from the RIKEN BRC Cell Bank to the CLO. Overall, the CLO now contains ~38,000 classes of specific cell line cells derived from over 200 in vivo cell types from various organisms.
    Utility and discussion: The CLO has been applied to different biomedical research studies. Example case studies include annotation and analysis of EBI ArrayExpress data, bioassays, and host-vaccine/pathogen interactions. The CLO's utility goes beyond a catalogue of cell line types. The alignment of the CLO with related ontologies, combined with the use of ontological reasoners, will support sophisticated inferencing to advance translational informatics development.
    http://deepblue.lib.umich.edu/bitstream/2027.42/109554/1/13326_2013_Article_185.pd
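    To illustrate how the CLO's is_a hierarchy supports browsing and querying, the sketch below parses a local copy of the ontology in OBO flat-file format and counts the transitive subclasses of a chosen term. It assumes a file named clo.obo and reads only the id, name, and is_a tags; the root term ID is a placeholder to be checked against the actual ontology.

```python
"""Minimal OBO parser sketch for walking the CLO is_a hierarchy.
Assumptions: a local 'clo.obo' file; the root term ID is a placeholder."""

from collections import defaultdict

def parse_obo(path):
    """Return {term_id: name} and {parent_id: [child_id, ...]}."""
    names, children = {}, defaultdict(list)
    term_id = None
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line == "[Term]":
                term_id = None
            elif line.startswith("id: "):
                term_id = line[4:]
            elif line.startswith("name: ") and term_id:
                names[term_id] = line[6:]
            elif line.startswith("is_a: ") and term_id:
                parent = line[6:].split(" ! ")[0]
                children[parent].append(term_id)
    return names, children

def descendants(term, children):
    """All transitive is_a descendants of a term."""
    stack, seen = [term], set()
    while stack:
        for child in children.get(stack.pop(), []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

names, children = parse_obo("clo.obo")
root = "CLO:0000001"  # placeholder ID; verify the intended term locally
print(f"{names.get(root, root)} has {len(descendants(root, children))} descendants")
```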

    OBO Foundry in 2021: Operationalizing Open Data Principles to Evaluate Ontologies

    Biological ontologies are used to organize, curate, and interpret the vast quantities of data arising from biological experiments. While this works well when using a single ontology, integrating multiple ontologies can be problematic, as they are developed independently, which can lead to incompatibilities. The Open Biological and Biomedical Ontologies (OBO) Foundry was created to address this by facilitating the development, harmonization, application, and sharing of ontologies, guided by a set of overarching principles. One challenge in reaching these goals was that the OBO principles were not originally encoded in a precise fashion, and interpretation was subjective. Here we show how we have addressed this by formally encoding the OBO principles as operational rules and implementing a suite of automated validation checks and a dashboard for objectively evaluating each ontology's compliance with each principle. This entailed a substantial effort to curate metadata across all ontologies and to coordinate with individual stakeholders. We have applied these checks across the full OBO suite of ontologies, revealing areas where individual ontologies require changes to conform to our principles. Our work demonstrates how a sizable federated community can be organized and evaluated on objective criteria that help improve overall quality and interoperability, which is vital for the sustenance of the OBO project and towards the overall goals of making data FAIR. Competing Interest Statement: The authors have declared no competing interests.
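    The sketch below gives a flavor of the kind of operational check described above: a registry-style metadata record for an ontology is validated against a handful of required fields. The field names and messages are simplified assumptions and do not reproduce the real OBO Foundry dashboard implementation.

```python
"""Simplified sketch of an automated metadata check against operationalized
ontology-registry principles. Field names and messages are assumptions, not
the actual OBO Foundry dashboard rules."""

REQUIRED_FIELDS = {
    "license":     "an open license must be declared (openness principle)",
    "tracker":     "a public issue tracker must be listed",
    "contact":     "a named, responsive contact is required",
    "description": "a human-readable description of scope is required",
}

def check_record(record):
    """Return (field, explanation) pairs for every missing or empty field."""
    return [(field, why) for field, why in REQUIRED_FIELDS.items()
            if not record.get(field)]

# Hypothetical registry entry for an ontology.
record = {
    "id": "exo",
    "title": "Example Ontology",
    "license": "CC-BY 4.0",
    "contact": "",  # empty contact -> flagged below
}

for field, why in check_record(record):
    print(f"FAIL {field}: {why}")
```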

    Modelling kidney disease using ontology: insights from the Kidney Precision Medicine Project.

    An important need exists to better understand and stratify kidney disease according to its underlying pathophysiology in order to develop more precise and effective therapeutic agents. National collaborative efforts such as the Kidney Precision Medicine Project are working towards this goal through the collection and integration of large, disparate clinical, biological and imaging data from patients with kidney disease. Ontologies are powerful tools that facilitate these efforts by enabling researchers to organize and make sense of different data elements and the relationships between them. Ontologies are critical to support the types of big data analysis necessary for kidney precision medicine, where heterogeneous clinical, imaging and biopsy data from diverse sources must be combined to define a patient's phenotype. The development of two new ontologies - the Kidney Tissue Atlas Ontology and the Ontology of Precision Medicine and Investigation - will support the creation of the Kidney Tissue Atlas, which aims to provide a comprehensive molecular, cellular and anatomical map of the kidney. These ontologies will improve the annotation of kidney-relevant data, and eventually lead to new definitions of kidney disease in support of precision medicine.

    Promoting Coherent Minimum Reporting Guidelines for Biological and Biomedical Investigations: the MIBBI Project

    To fully understand the context, methods, data and conclusions that pertain to an experiment, one must have access to a range of background information. However, the current diversity of experimental designs and analytical techniques complicates the discovery and evaluation of experimental data; furthermore, the increasing rate of production of those data compounds the problem. Community opinion increasingly favors that a regularized set of the available metadata ('data about the data') pertaining to an experiment [1, 2] be associated with the results, making explicit both the biological and methodological contexts. Many journals and funding agencies now require that authors reporting microarray-based transcriptomics experiments comply with the Minimum Information about a Microarray Experiment (MIAME) checklist [3] as a prerequisite for publication [4-7]. Similarly, minimum information guidelines for reporting proteomics experiments and describing systems biology models are gaining broader support in their respective database communities [8, 9]; and progress is being made toward the standardization of the reporting of clinical trials in the medical literature [10]. Such minimum information checklists promote transparency in experimental reporting, enhance accessibility to data and support effective quality assessment, increasing the general value of a body of work (and the competitiveness of the originators). This article is from Nature Biotechnology 26 (2008): 889, doi:10.1038/nbt.1411.
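    In practice, such checklists lend themselves to simple automated validation. The sketch below checks a hypothetical submission record against a simplified, MIAME-style list of required reporting elements; the field names are illustrative rather than the official checklist wording.

```python
"""Sketch of applying a minimum-information checklist in software: a
submission record is checked against a simplified, MIAME-style list of
required reporting elements. Field names are illustrative assumptions."""

MIAME_STYLE_CHECKLIST = [
    "raw_data",             # raw measurement files (e.g. scanner output)
    "processed_data",       # normalized expression matrix
    "sample_annotation",    # biological source and treatment of each sample
    "experimental_design",  # factors, replicates, hybridization layout
    "array_design",         # identity/annotation of the array platform
    "protocols",            # wet-lab and data-processing protocols
]

def missing_elements(submission):
    """Return checklist items that are absent or empty in a submission."""
    return [item for item in MIAME_STYLE_CHECKLIST if not submission.get(item)]

submission = {  # hypothetical submission metadata
    "raw_data": "raw/*.CEL",
    "processed_data": "matrix.tsv",
    "sample_annotation": "samples.sdrf.tsv",
    "experimental_design": "2x2 factorial, 3 biological replicates",
}

gaps = missing_elements(submission)
if gaps:
    print("Submission incomplete; missing:", ", ".join(gaps))
else:
    print("All checklist elements present")
```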