
    AlzPathway: a comprehensive map of signaling pathways of Alzheimer’s disease

    BACKGROUND: Alzheimer’s disease (AD) is the most common cause of dementia among the elderly. Thousands of reports have accumulated toward clarifying the pathogenesis of AD, but knowledge of AD signaling pathways had not previously been compiled into a database. DESCRIPTION: Here, we have constructed a publicly available pathway map called “AlzPathway” that comprehensively catalogs signaling pathways in the field of AD. We collected and manually curated over 100 review articles related to AD and built an AD pathway map using CellDesigner. AlzPathway currently comprises 1347 molecules and 1070 reactions in neurons, the blood-brain barrier, presynaptic and postsynaptic terminals, astrocytes, and microglial cells, together with their cellular localizations. AlzPathway is available both as an SBML (Systems Biology Markup Language) map for CellDesigner and as a high-resolution image map. It is also available as a web service (online map) based on the Payao system, a community-based, collaborative web service platform for pathway model curation, enabling continuous updates by AD researchers. CONCLUSIONS: AlzPathway is the first comprehensive map of intra-, inter-, and extracellular AD signaling pathways, which can enable mechanistic deciphering of AD pathogenesis. The AlzPathway map is accessible at http://alzpathway.org/
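    Because the map is distributed as SBML, it can be inspected programmatically. The sketch below, using only the standard library, counts species and reactions in an SBML document while ignoring the level/version namespace; the embedded SBML fragment is a hypothetical stand-in for the real AlzPathway file, and a serious analysis would use a dedicated library such as libsbml.

```python
import xml.etree.ElementTree as ET

# Minimal, hypothetical SBML fragment standing in for the AlzPathway map
# (the real file is downloadable from http://alzpathway.org/).
SAMPLE_SBML = """<?xml version="1.0" encoding="UTF-8"?>
<sbml xmlns="http://www.sbml.org/sbml/level2/version4" level="2" version="4">
  <model id="AlzPathway_demo">
    <listOfSpecies>
      <species id="APP"/>
      <species id="Abeta42"/>
    </listOfSpecies>
    <listOfReactions>
      <reaction id="APP_cleavage"/>
    </listOfReactions>
  </model>
</sbml>"""

def count_sbml_elements(sbml_text):
    """Count species and reactions regardless of the SBML namespace URI."""
    root = ET.fromstring(sbml_text)
    species = [e for e in root.iter() if e.tag.endswith("}species")]
    reactions = [e for e in root.iter() if e.tag.endswith("}reaction")]
    return len(species), len(reactions)

n_species, n_reactions = count_sbml_elements(SAMPLE_SBML)
print(n_species, n_reactions)  # → 2 1
```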

    Using spin to understand the formation of LIGO's black holes

    With the detection of four candidate binary black hole (BBH) mergers by the Advanced LIGO detectors thus far, it is becoming possible to constrain the properties of the BBH merger population in order to better understand the formation of these systems. Black hole (BH) spin orientations are one of the cleanest discriminators of formation history: BHs in binaries formed dynamically in dense stellar environments are expected to have isotropically distributed spins, in contrast to isolated populations, where stellar evolution is expected to induce BH spins preferentially aligned with the orbital angular momentum. In this work we propose a simple, model-agnostic approach to characterizing the spin properties of LIGO's BBH population. Using measurements of the effective spin of the binaries, which is LIGO's best-constrained spin parameter, we introduce a simple parameter to quantify the fraction of the population that is isotropically distributed, regardless of the population's spin magnitude distribution. Once the orientation characteristics of the population have been determined, we show how measurements of effective spin can be used to directly constrain the underlying BH spin magnitude distribution. Although we find that the majority of the current effective spin measurements are too small to be informative, with LIGO's four BBH candidates we find a slight preference for an underlying population with aligned spins over one with isotropic spins (with an odds ratio of 1.1). We argue that it will be possible to distinguish symmetric and anti-symmetric populations at high confidence with tens of additional detections, although mixed populations may take significantly more detections to disentangle. We also derive preliminary spin magnitude distributions for LIGO's black holes, under the assumption of aligned or isotropic populations.
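    The effective spin mentioned above has a standard definition: the mass-weighted sum of the spin components along the orbital angular momentum, χ_eff = (m₁a₁cosθ₁ + m₂a₂cosθ₂)/(m₁ + m₂). The toy Monte Carlo below (not the paper's actual analysis) illustrates why it discriminates between populations: with isotropic tilt angles the sample mean of χ_eff sits near zero, while perfectly aligned spins give a strictly positive value. Masses and the spin magnitude a = 0.5 are arbitrary illustrative choices.

```python
import random

def chi_eff(m1, m2, a1, a2, cos_t1, cos_t2):
    """Effective spin: mass-weighted spin components along the orbital
    angular momentum (standard definition, not specific to this paper)."""
    return (m1 * a1 * cos_t1 + m2 * a2 * cos_t2) / (m1 + m2)

random.seed(0)
# Toy populations with equal masses and fixed spin magnitude a = 0.5.
# Isotropic orientations correspond to cos(theta) uniform on [-1, 1].
iso = [chi_eff(30, 30, 0.5, 0.5, random.uniform(-1, 1), random.uniform(-1, 1))
       for _ in range(100_000)]
aligned = chi_eff(30, 30, 0.5, 0.5, 1.0, 1.0)

mean_iso = sum(iso) / len(iso)
print(abs(mean_iso) < 0.05)  # isotropic population: mean chi_eff near zero
print(aligned)               # → 0.5 for perfectly aligned spins
```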

    Combined In Silico and In Vivo Analyses Reveal Role of Hes1 in Taste Cell Differentiation

    The sense of taste is of critical importance to animal survival. Although studies of taste signal transduction mechanisms have provided detailed information regarding taste receptor calcium signaling molecules (TRCSMs, required for sweet/bitter/umami taste signal transduction), the ontogeny of taste cells is still largely unknown. We used a novel approach to investigate the molecular regulation of taste system development in mice by combining in silico and in vivo analyses. After discovering that TRCSMs colocalized within developing circumvallate papillae (CVP), we used computational analysis of the upstream regulatory regions of TRCSM genes to investigate the possibility of a common regulatory network for TRCSM transcription. Based on this analysis, we identified Hes1 as a likely common regulatory factor and examined its function in vivo. Expression profile analyses revealed that decreased expression of nuclear HES1 correlated with expression of type II taste cell markers. After stage E18, the CVP of Hes1−/− mutants displayed over 5-fold more TRCSM-immunoreactive cells than did the CVP of their wild-type littermates. Thus, according to our composite analyses, Hes1 is likely to play a role in orchestrating taste cell differentiation in developing taste buds.

    Hub-Centered Gene Network Reconstruction Using Automatic Relevance Determination

    Network inference deals with the reconstruction of biological networks from experimental data. A variety of different reverse engineering techniques are available; they differ in the underlying assumptions and mathematical models used. One common problem for all approaches stems from the complexity of the task, due to the combinatorial explosion of different network topologies for increasing network size. To handle this problem, constraints are frequently used, for example on the node degree, number of edges, or constraints on regulation functions between network components. We propose to exploit topological considerations in the inference of gene regulatory networks. Such systems are often controlled by a small number of hub genes, while most other genes have only limited influence on the network's dynamics. We model gene regulation using a Bayesian network with discrete, Boolean nodes. A hierarchical prior is employed to identify hub genes. The first layer of the prior is used to regularize weights on edges emanating from one specific node. A second prior on hyperparameters controls the magnitude of the former regularization for different nodes. The net effect is that central nodes tend to form in reconstructed networks. Network reconstruction is then performed by maximization of or sampling from the posterior distribution. We evaluate our approach on simulated and real experimental data, indicating that we can reconstruct main regulatory interactions from the data. We furthermore compare our approach to other state-of-the-art methods, showing superior performance in identifying hubs. Using a large publicly available dataset of over 800 cell cycle regulated genes, we are able to identify several main hub genes. Our method may thus provide a valuable tool to identify interesting candidate genes for further study. Furthermore, the approach presented may stimulate further developments in regularization methods for network reconstruction from data.
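    The hub-favoring effect of such a hierarchical prior can be illustrated with the classic automatic relevance determination (ARD) fixed point for a zero-mean Gaussian prior shared by all of a regulator's outgoing edge weights: α_j = N_j / Σ_k w_jk². This Gaussian sketch is an analogy, not the paper's discrete Bayesian network model, and the gene names and weights are hypothetical.

```python
def ard_update_precisions(weights):
    """One ARD-style update: for each regulator j, set the shared prior
    precision alpha_j to N_j / sum_k w_jk^2 (type-II ML fixed point for a
    zero-mean Gaussian prior over j's outgoing edge weights). A hub with
    many strong outgoing edges gets a small alpha (weak shrinkage), while
    a peripheral node gets a large alpha and is effectively pruned."""
    alphas = {}
    for j, out_edges in weights.items():
        ssq = sum(w * w for w in out_edges.values())
        alphas[j] = len(out_edges) / ssq if ssq > 0 else float("inf")
    return alphas

# Hypothetical 3-regulator example: geneA behaves like a hub.
weights = {
    "geneA": {"geneB": 0.9, "geneC": 0.8, "geneD": 1.1},
    "geneB": {"geneC": 0.05},
}
alphas = ard_update_precisions(weights)
print(alphas["geneA"] < alphas["geneB"])  # → True: hub is shrunk less
```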

    The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications

    BACKGROUND: The interaction between biological researchers and the bioinformatics tools they use is still hampered by incomplete interoperability between such tools. To ensure interoperability initiatives are effectively deployed, end-user applications need to be aware of, and support, best practices and standards. Here, we report on an initiative in which software developers and genome biologists came together to explore and raise awareness of these issues: BioHackathon 2009. RESULTS: Developers in attendance came from diverse backgrounds, with experts in Web services, workflow tools, text mining and visualization. Genome biologists provided expertise and exemplar data from the domains of sequence and pathway analysis and glyco-informatics. One goal of the meeting was to evaluate the ability to address real world use cases in these domains using the tools that the developers represented. This resulted in i) a workflow to annotate 100,000 sequences from an invertebrate species; ii) an integrated system for analysis of transcription factor binding sites (TFBSs) enriched on the basis of differential gene expression data obtained from a microarray experiment; iii) a workflow to enumerate putative physical protein interactions among enzymes in a metabolic pathway using protein structure data; and iv) a workflow to analyze glyco-gene-related diseases by searching for human homologs of glyco-genes in other species, such as fruit flies, and retrieving their phenotype-annotated SNPs. CONCLUSIONS: Beyond deriving prototype solutions for each use case, a second major purpose of the BioHackathon was to highlight areas of insufficiency. We discuss the issues raised by our exploration of the problem/solution space, concluding that there are still problems with the way Web services are modeled and annotated, including: i) the absence of several useful data or analysis functions in the Web service "space"; ii) the lack of documentation of methods; iii) lack of compliance with the SOAP/WSDL specification among and between various programming-language libraries; and iv) incompatibility between various bioinformatics data formats. Although these problems made it difficult to solve the real world use cases posed to the developers by the biological researchers in attendance, we note the promise of addressing these issues within a semantic framework.

    The 3rd DBCLS BioHackathon: improving life science data integration with Semantic Web technologies.

    BACKGROUND: BioHackathon 2010 was the third in a series of meetings hosted by the Database Center for Life Science (DBCLS) in Tokyo, Japan. The overall goal of the BioHackathon series is to improve the quality and accessibility of life science research data on the Web by bringing together representatives from public databases, analytical tool providers, and cyber-infrastructure researchers to jointly tackle important challenges in the area of in silico biological research. RESULTS: The theme of BioHackathon 2010 was the 'Semantic Web', and all attendees gathered with the shared goal of producing Semantic Web data from their respective resources, and/or consuming or interacting with those data using their tools and interfaces. We discussed topics including guidelines for designing semantic data and the interoperability of resources, and we consequently developed tools and clients for analysis and visualization. CONCLUSION: We provide a meeting report from BioHackathon 2010, in which we describe the discussions, decisions, and breakthroughs made as we moved towards compliance with Semantic Web technologies - from source provider, through middleware, to the end-consumer.

    BioHackathon series in 2011 and 2012: penetration of ontology and linked data in life science domains

    The application of semantic technologies to the integration of biological data and the interoperability of bioinformatics analysis and visualization tools has been the common theme of a series of annual BioHackathons hosted in Japan for the past five years. Here we provide a review of the activities and outcomes from the BioHackathons held in 2011 in Kyoto and 2012 in Toyama. In order to efficiently implement semantic technologies in the life sciences, participants formed various sub-groups and worked on the following topics: Resource Description Framework (RDF) models for specific domains, text mining of the literature, ontology development, essential metadata for biological databases, platforms to enable efficient Semantic Web technology development and interoperability, and the development of applications for Semantic Web data. In this review, we briefly introduce the themes covered by these sub-groups. The observations made, conclusions drawn, and software development projects that emerged from these activities are discussed.
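    The RDF data model at the center of these hackathons reduces to subject-predicate-object triples queried by pattern matching, which a few lines of plain Python can illustrate. The URIs below are hypothetical stand-ins, and real linked-data work would use a library such as rdflib with a SPARQL endpoint rather than this toy store.

```python
# Minimal triple store illustrating the RDF data model: every statement
# is a (subject, predicate, object) triple. URIs here are hypothetical.
triples = {
    ("ex:BRCA1", "rdf:type", "ex:Gene"),
    ("ex:BRCA1", "ex:associatedWith", "ex:BreastCancer"),
    ("ex:TP53", "rdf:type", "ex:Gene"),
}

def query(pattern, store):
    """Match a (s, p, o) pattern; None plays the role of a SPARQL variable."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

genes = query((None, "rdf:type", "ex:Gene"), triples)
print(len(genes))  # → 2
```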

    The Human Phenotype Ontology in 2024: phenotypes around the world.

    The Human Phenotype Ontology (HPO) is a widely used resource that comprehensively organizes and defines the phenotypic features of human disease, enabling computational inference and supporting genomic and phenotypic analyses through semantic similarity and machine learning algorithms. The HPO has widespread applications in clinical diagnostics and translational research, including genomic diagnostics, gene-disease discovery, and cohort analytics. In recent years, groups around the world have developed translations of the HPO from English to other languages, and the HPO browser has been internationalized, allowing users to view HPO term labels and, in many cases, synonyms and definitions in ten languages in addition to English. Since our last report, a total of 2239 new HPO terms and 49235 new HPO annotations were developed, many in collaboration with external groups in the fields of psychiatry, arthrogryposis, immunology and cardiology. The Medical Action Ontology (MAxO) is a new effort to model treatments and other measures taken for clinical management. Finally, the HPO consortium is contributing to efforts to integrate the HPO and the GA4GH Phenopacket Schema into electronic health records (EHRs) with the goal of more standardized and computable integration of rare disease data in EHRs.
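    The semantic similarity mentioned above typically compares two ontology terms through their shared ancestors. The sketch below uses a Jaccard index over reflexive ancestor sets on a tiny hypothetical hierarchy (the term IDs are not real HPO identifiers); production tools often use information-content measures such as Resnik similarity instead.

```python
# Toy hierarchy with hypothetical term IDs (not real HPO terms).
PARENTS = {
    "HP:A": [],            # root: phenotypic abnormality
    "HP:B": ["HP:A"],      # abnormality of the heart
    "HP:C": ["HP:B"],      # arrhythmia
    "HP:D": ["HP:B"],      # cardiomyopathy
    "HP:E": ["HP:A"],      # abnormality of the eye
}

def ancestors(term):
    """Return the term plus all of its ancestors (reflexive closure)."""
    seen, stack = {term}, [term]
    while stack:
        for p in PARENTS[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def jaccard_similarity(t1, t2):
    """Jaccard index of the two terms' ancestor sets."""
    a, b = ancestors(t1), ancestors(t2)
    return len(a & b) / len(a | b)

print(jaccard_similarity("HP:C", "HP:D"))  # → 0.5 (siblings share heart ancestry)
print(jaccard_similarity("HP:C", "HP:E"))  # → 0.25 (only the root is shared)
```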

    GA4GH: International policies and standards for data sharing across genomic research and healthcare.

    The Global Alliance for Genomics and Health (GA4GH) aims to accelerate biomedical advances by enabling the responsible sharing of clinical and genomic data through both harmonized data aggregation and federated approaches. The decreasing cost of genomic sequencing (along with other genome-wide molecular assays) and increasing evidence of its clinical utility will soon drive the generation of sequence data from tens of millions of humans, with increasing levels of diversity. In this perspective, we present the GA4GH strategies for addressing the major challenges of this data revolution. We describe the GA4GH organization, which is fueled by the development efforts of eight Work Streams and informed by the needs of 24 Driver Projects and other key stakeholders. We present the GA4GH suite of secure, interoperable technical standards and policy frameworks and review the current status of standards, their relevance to key domains of research and clinical care, and future plans of GA4GH. Broad international participation in building, adopting, and deploying GA4GH standards and frameworks will catalyze an unprecedented effort in data sharing that will be critical to advancing genomic medicine and ensuring that all populations can access its benefits.