
    Prognostic and Diagnostic Potential of the Structural Neuroanatomy of Depression

    Background: Depression is experienced as a persistent low mood or anhedonia accompanied by behavioural and cognitive disturbances which impair day-to-day functioning. However, the diagnosis is largely based on self-reported symptoms, and there are no neurobiological markers to guide the choice of treatment. In the present study, we examined the prognostic and diagnostic potential of the structural neural correlates of depression. Methodology and Principal Findings: Subjects were 37 patients with major depressive disorder (mean age 43.2 years), medication-free, in an acute depressive episode, and 37 healthy individuals. Following the MRI scan, 30 patients underwent treatment with the antidepressant medication fluoxetine or cognitive behavioural therapy (CBT). Of the patients who subsequently achieved clinical remission with antidepressant medication, the whole-brain structural neuroanatomy predicted 88.9% of the clinical response prior to the initiation of treatment (88.9% of patients in clinical remission (sensitivity) and 88.9% of patients with residual symptoms (specificity), p = 0.01). Accuracy of the structural neuroanatomy as a diagnostic marker, though, was 67.6% (64.9% of patients (sensitivity) and 70.3% of healthy individuals (specificity), p = 0.027). Conclusions and Significance: The structural neuroanatomy of depression shows high predictive potential for clinical response to antidepressant medication, while its diagnostic potential is more limited. The present findings provide initial steps towards the development of neurobiological prognostic markers for depression.
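    The prognostic figures above come from pattern classification of pre-treatment structural scans. As a rough illustration only, the following sketch estimates sensitivity and specificity with leave-one-out cross-validation; the random feature matrix, remission labels and linear SVM are placeholder assumptions, not the study's actual pipeline.

```python
# Hedged sketch: prognostic pattern classification on structural MRI features.
# Features, labels and classifier are illustrative assumptions only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 500))       # hypothetical grey-matter features, 30 patients
y = rng.integers(0, 2, size=30)      # 1 = remission, 0 = residual symptoms (hypothetical)

clf = SVC(kernel="linear")
y_pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
sensitivity = tp / (tp + fn)          # correctly identified remitters
specificity = tn / (tn + fp)          # correctly identified non-remitters
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```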

    CDD: a Conserved Domain Database for protein classification

    The Conserved Domain Database (CDD) is the protein classification component of NCBI's Entrez query and retrieval system. CDD is linked to other Entrez databases such as Proteins, Taxonomy and PubMed®, and can be accessed at http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?db=cdd. CD-Search, which is available at http://www.ncbi.nlm.nih.gov/Structure/cdd/wrpsb.cgi, is a fast, interactive tool to identify conserved domains in new protein sequences. CD-Search results for protein sequences in Entrez are pre-computed to provide links between proteins and domain models, and computational annotation visible upon request. Protein–protein queries submitted to NCBI's BLAST search service at http://www.ncbi.nlm.nih.gov/BLAST are scanned for the presence of conserved domains by default. While CDD started out as essentially a mirror of publicly available domain alignment collections, such as SMART, Pfam and COG, we have continued an effort to update, and in some cases replace, these models with domain hierarchies curated at the NCBI. Here, we report on the progress of the curation effort and associated improvements in the functionality of the CDD information retrieval system.
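    Since CDD is exposed as an Entrez database, one illustrative way to query it programmatically is through Biopython's Entrez module, sketched below. The search term, contact address and summary field names are assumptions for illustration; CD-Search itself is accessed through the web interface noted above.

```python
# Hedged sketch: querying the CDD Entrez database with Biopython.
# Assumes Biopython is installed and network access to NCBI is available.
from Bio import Entrez

Entrez.email = "you@example.org"   # NCBI requires a contact address (placeholder)

# Search the 'cdd' Entrez database for a domain of interest (illustrative term)
handle = Entrez.esearch(db="cdd", term="SH2 domain", retmax=5)
record = Entrez.read(handle)
handle.close()

if record["IdList"]:
    # Fetch summaries for the returned CDD records
    handle = Entrez.esummary(db="cdd", id=",".join(record["IdList"]))
    summaries = Entrez.read(handle)
    handle.close()
    for s in summaries:
        # Field names are an assumption; inspect the returned keys if they differ.
        print(s.get("Accession"), s.get("Title"))
```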

    Genome-wide expression assay comparison across frozen and fixed postmortem brain tissue samples

    Background: Gene expression assays have been shown to yield high-quality genome-wide data from partially degraded RNA samples. However, these methods have not yet been applied to postmortem human brain tissue, despite their potential to overcome poor RNA quality and other technical limitations inherent in many assays. We compared cDNA-mediated annealing, selection, and ligation (DASL)- and in vitro transcription (IVT)-based genome-wide expression profiling assays on RNA samples from artificially degraded reference pools, frozen brain tissue, and formalin-fixed brain tissue. Results: The DASL-based platform produced expression results of greater reliability than the IVT-based platform in artificially degraded reference brain RNA and RNA from frozen tissue-based samples. Although data associated with a small sample of formalin-fixed RNA samples were poor when obtained from both assays, the DASL-based platform exhibited greater reliability in a subset of probes and samples. Conclusions: Our results suggest that the DASL-based gene expression profiling platform may confer some advantages on mRNA assays of the brain over traditional IVT-based methods. We ultimately consider the implications of these results on investigations of neuropsychiatric disorders.
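    A common way to compare platform reliability of the kind described above is to correlate technical replicates of the same sample. The sketch below illustrates that metric on simulated data; the noise levels attached to the "DASL-like" and "IVT-like" labels are arbitrary assumptions, not values taken from the study.

```python
# Hedged sketch: replicate-concordance check used to compare expression platforms.
# The data are simulated; real DASL/IVT intensity matrices would replace them.
import numpy as np

rng = np.random.default_rng(1)
true_expr = rng.normal(loc=8.0, scale=2.0, size=5000)     # latent log2 expression

def simulate_replicates(noise_sd):
    """Two technical replicates of one sample with platform-specific noise."""
    rep1 = true_expr + rng.normal(scale=noise_sd, size=true_expr.size)
    rep2 = true_expr + rng.normal(scale=noise_sd, size=true_expr.size)
    return rep1, rep2

for platform, noise in [("DASL-like (lower noise, assumed)", 0.3),
                        ("IVT-like (higher noise, assumed)", 0.8)]:
    r1, r2 = simulate_replicates(noise)
    r = np.corrcoef(r1, r2)[0, 1]     # Pearson correlation as a reliability proxy
    print(f"{platform}: replicate correlation = {r:.3f}")
```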

    AI-based dimensional neuroimaging system for characterizing heterogeneity in brain structure and function in major depressive disorder: COORDINATE-MDD consortium design and rationale

    BACKGROUND: Efforts to develop neuroimaging-based biomarkers in major depressive disorder (MDD), at the individual level, have been limited to date. As diagnostic criteria are currently symptom-based, MDD is conceptualized as a disorder rather than a disease with a known etiology; further, neural measures are often confounded by medication status and heterogeneous symptom states. METHODS: We describe a consortium to quantify neuroanatomical and neurofunctional heterogeneity via the dimensions of a novel multivariate coordinate system (COORDINATE-MDD). Utilizing imaging harmonization and machine learning methods in a large cohort of medication-free, deeply phenotyped MDD participants, patterns of brain alteration are defined in replicable and neurobiologically based dimensions and offer the potential to predict treatment response at the individual level. International datasets are being shared from multi-ethnic community populations, first-episode and recurrent MDD that are medication-free, in a current depressive episode with prospective longitudinal treatment outcomes, and in remission. Neuroimaging data consist of de-identified, individual, structural MRI and resting-state functional MRI, with additional positron emission tomography (PET) data at specific sites. State-of-the-art analytic methods include automated image processing for extraction of anatomical and functional imaging variables, statistical harmonization of imaging variables to account for site and scanner variations, and semi-supervised machine learning methods that identify dominant patterns associated with MDD from neural structure and function in healthy participants. RESULTS: We are applying an iterative process by defining the neural dimensions that characterise deeply phenotyped samples and then testing the dimensions in novel samples to assess specificity and reliability. Crucially, we aim to use machine learning methods to identify novel predictors of treatment response based on prospective longitudinal treatment outcome data, and we can externally validate the dimensions in fully independent sites. CONCLUSION: We describe the consortium, imaging protocols and analytics using preliminary results. Our findings thus far demonstrate how datasets across many sites can be harmonized and constructively pooled to enable execution of this large-scale project.
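    Statistical harmonization of imaging variables across sites and scanners is a key step in this design. The sketch below shows a deliberately simplified, mean-shift-only stand-in for ComBat-style harmonization on synthetic features; the site labels, offsets and feature matrix are all hypothetical, and the consortium's actual pipeline is more sophisticated.

```python
# Hedged sketch: simplified site harmonization (mean-shift removal only).
# All data below are synthetic placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n_subjects, n_features = 120, 10
site = rng.choice(["siteA", "siteB", "siteC"], size=n_subjects)   # hypothetical sites
X = rng.normal(size=(n_subjects, n_features))
# Inject additive site offsets to mimic scanner differences (arbitrary values)
offsets = {"siteA": 0.0, "siteB": 0.7, "siteC": -0.5}
X += np.array([offsets[s] for s in site])[:, None]

df = pd.DataFrame(X)
grand_mean = df.mean()
# Remove each site's mean shift per feature, then restore the pooled mean
harmonized = df.groupby(site).transform(lambda g: g - g.mean()) + grand_mean

print("site means before:", df.groupby(site).mean().iloc[:, 0].round(2).to_dict())
print("site means after: ", harmonized.groupby(site).mean().iloc[:, 0].round(2).to_dict())
```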

    Federated learning enables big data for rare cancer boundary detection.

    Although machine learning (ML) has shown promise across disciplines, out-of-sample generalizability is concerning. This is currently addressed by sharing multi-site data, but such centralization is challenging or infeasible to scale due to various limitations. Federated ML (FL) provides an alternative paradigm for accurate and generalizable ML, by only sharing numerical model updates. Here we present the largest FL study to date, involving data from 71 sites across 6 continents, to generate an automatic tumor boundary detector for the rare disease of glioblastoma, reporting the largest such dataset in the literature (n = 6,314). We demonstrate a 33% delineation improvement for the surgically targetable tumor, and 23% for the complete tumor extent, over a publicly trained model. We anticipate our study to: 1) enable more healthcare studies informed by large diverse data, ensuring meaningful results for rare diseases and underrepresented populations, 2) facilitate further analyses for glioblastoma by releasing our consensus model, and 3) demonstrate the effectiveness of FL at such scale and task complexity as a paradigm shift for multi-site collaborations, alleviating the need for data sharing.
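    The core idea of FL, sharing only numerical model updates rather than data, can be illustrated with federated averaging (FedAvg). The sketch below trains a toy linear model across three simulated sites and aggregates size-weighted parameter updates on a central server; it is a minimal illustration of the paradigm, not the consensus segmentation model described above.

```python
# Hedged sketch of federated averaging (FedAvg) on synthetic data:
# each site trains locally; only model parameters reach the server.
import numpy as np

rng = np.random.default_rng(3)
n_features = 5
true_w = rng.normal(size=n_features)

def make_site_data(n):
    X = rng.normal(size=(n, n_features))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

sites = [make_site_data(n) for n in (50, 80, 120)]   # three sites, unequal sizes
w_global = np.zeros(n_features)

for rnd in range(20):                                # communication rounds
    local_ws, sizes = [], []
    for X, y in sites:
        w = w_global.copy()
        for _ in range(5):                           # local gradient steps
            grad = X.T @ (X @ w - y) / len(y)
            w -= 0.1 * grad
        local_ws.append(w)
        sizes.append(len(y))
    # Server aggregation: size-weighted average of local models (no raw data shared)
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("recovery error:", np.linalg.norm(w_global - true_w).round(4))
```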

    Author Correction: Federated learning enables big data for rare cancer boundary detection.

    Nature Communications 14. DOI: 10.1038/s41467-023-36188-7

    Large expert-curated database for benchmarking document similarity detection in biomedical literature search

    Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries, who have collectively annotated the relevance of over 180,000 PubMed-listed articles with regard to their respective seed (input) article(s). The majority of annotations were contributed by highly experienced, original authors of the seed articles. The collected data cover 76% of all unique PubMed Medical Subject Headings descriptors. No systematic biases were observed across different experience levels, research fields or time spent on annotations. More importantly, annotations of the same document pairs contributed by different scientists were highly concordant. We further show that the three representative baseline methods used to generate recommended articles for evaluation (Okapi Best Matching 25, Term Frequency-Inverse Document Frequency and PubMed Related Articles) had similar overall performances. Additionally, we found that these methods each tend to produce distinct collections of recommended articles, suggesting that a hybrid method may be required to completely capture all relevant articles. The established database server located at https://relishdb.ict.griffith.edu.au is freely available for the downloading of annotation data and the blind testing of new methods. We expect that this benchmark will be useful for stimulating the development of new powerful techniques for title- and title/abstract-based search engines for relevant articles in biomedical research.
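    Of the baseline methods named above, TF-IDF with cosine similarity is the simplest to reproduce. The sketch below ranks a few toy documents against a seed text using scikit-learn; the corpus and seed are placeholders, not RELISH data.

```python
# Hedged sketch: TF-IDF ranking of the kind used as a baseline above.
# Corpus and seed text are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "conserved protein domain classification and annotation",
    "structural neuroimaging markers of depression treatment response",
    "federated machine learning for tumor boundary segmentation",
]
seed = "protein domain annotation with curated alignment models"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vecs = vectorizer.fit_transform(corpus)       # fit vocabulary on the corpus
seed_vec = vectorizer.transform([seed])           # embed the seed article text

scores = cosine_similarity(seed_vec, doc_vecs).ravel()
for idx in scores.argsort()[::-1]:                # most similar first
    print(f"{scores[idx]:.3f}  {corpus[idx]}")
```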

    The James Webb Space Telescope Mission

    Twenty-six years ago a small committee report, building on earlier studies, expounded a compelling and poetic vision for the future of astronomy, calling for an infrared-optimized space telescope with an aperture of at least 4 m. With the support of their governments in the US, Europe, and Canada, 20,000 people realized that vision as the 6.5 m James Webb Space Telescope. A generation of astronomers will celebrate their accomplishments for the life of the mission, potentially as long as 20 years, and beyond. This report and the scientific discoveries that follow are extended thank-you notes to the 20,000 team members. The telescope is working perfectly, with much better image quality than expected. In this and accompanying papers, we give a brief history, describe the observatory, outline its objectives and current observing program, and discuss the inventions and people who made it possible. We cite detailed reports on the design and the measured performance on orbit. (Accepted by PASP for the special issue on The James Webb Space Telescope Overview; 29 pages, 4 figures.)

    Finishing the euchromatic sequence of the human genome

    The sequence of the human genome encodes the genetic instructions for human physiology, as well as rich information about human evolution. In 2001, the International Human Genome Sequencing Consortium reported a draft sequence of the euchromatic portion of the human genome. Since then, the international collaboration has worked to convert this draft into a genome sequence with high accuracy and nearly complete coverage. Here, we report the result of this finishing process. The current genome sequence (Build 35) contains 2.85 billion nucleotides interrupted by only 341 gaps. It covers ∼99% of the euchromatic genome and is accurate to an error rate of ∼1 event per 100,000 bases. Many of the remaining euchromatic gaps are associated with segmental duplications and will require focused work with new methods. The near-complete sequence, the first for a vertebrate, greatly improves the precision of biological analyses of the human genome, including studies of gene number, birth and death. Notably, the human genome seems to encode only 20,000-25,000 protein-coding genes. The genome sequence reported here should serve as a firm foundation for biomedical research in the decades ahead.
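    As a quick back-of-the-envelope reading of the figures above, the reported error rate and gap count imply roughly the following (rounded inputs, illustrative only):

```python
# Hedged arithmetic check using the rounded figures quoted in the abstract.
euchromatic_bases = 2.85e9      # nucleotides in Build 35
error_rate = 1 / 100_000        # ~1 event per 100,000 bases
gaps = 341

implied_errors = euchromatic_bases * error_rate        # ~28,500 residual errors
mean_ungapped_span = euchromatic_bases / (gaps + 1)    # ~8.3 Mb between gaps

print(f"implied residual errors: ~{implied_errors:,.0f}")
print(f"average ungapped span:  ~{mean_ungapped_span / 1e6:.1f} Mb")
```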