210 research outputs found

    Identifying TNF and IL6 as potential hub genes and targeted drugs associated with scleritis: A bio-informative report

    Background: Scleritis is a serious inflammatory eye disease that can lead to blindness. The etiology and pathogenesis of scleritis remain unclear, and increasing evidence indicates that specific genes and proteins are involved. This study aimed to identify pivotal genes and drug targets for scleritis, thus providing new directions for the treatment of this disease. Methods: We screened candidate genes and proteins associated with scleritis by text-mining the PubMed database using Python and assessed their functions with the DAVID database. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses were used to identify the functional enrichment of these genes and proteins. Hub genes were then identified with CytoHubba and assessed by protein-protein interaction (PPI) network analysis, and serum from patients with active scleritis and from healthy subjects was used to validate the hub genes. Finally, the DGIdb database was used to predict drugs targeting the hub genes for the treatment of scleritis. Results: A total of 56 genes and proteins were found to be linked to scleritis, and 65 significantly altered pathways were identified in the KEGG analysis (FDR < 0.05). Most of the top five pathways involved the categories "Rheumatoid arthritis," "Inflammatory bowel disease," "Type I diabetes mellitus," and "Graft-versus-host disease." TNF and IL6 were ranked as the top two hub genes by CytoHubba, and both were expressed at high levels in the serum of patients with active scleritis. Five scleritis-targeting drugs were found among 88 identified drugs. Conclusions: This study identifies key genes and drug targets related to scleritis through bioinformatics analysis. TNF and IL6 are considered key mediators and possible drug targets of scleritis, and five drug candidates may play an important role in the diagnosis and treatment of scleritis in the future, warranting further experimental and clinical study.
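    The pipeline above combines PubMed text mining with enrichment, network, and drug-interaction analysis. Below is a minimal, illustrative Python sketch of only the first step (retrieving scleritis-related abstracts and counting candidate gene mentions); it is not the authors' code, the e-mail address and gene list are placeholders, and the DAVID, CytoHubba, and DGIdb steps run in external tools and are not reproduced here.

        # Minimal sketch; assumes Biopython is installed. Gene list and e-mail are placeholders.
        import re
        from collections import Counter
        from Bio import Entrez

        Entrez.email = "you@example.org"  # NCBI requires a contact address; replace with your own
        CANDIDATE_GENES = ["TNF", "IL6", "IL1B", "IFNG"]  # illustrative symbols, not the paper's list

        def fetch_abstracts(query, retmax=200):
            """Search PubMed and return the matching abstracts as plain text."""
            handle = Entrez.esearch(db="pubmed", term=query, retmax=retmax)
            ids = Entrez.read(handle)["IdList"]
            handle.close()
            handle = Entrez.efetch(db="pubmed", id=",".join(ids), rettype="abstract", retmode="text")
            text = handle.read()
            handle.close()
            return text

        def count_gene_mentions(text):
            """Count whole-word mentions of each candidate gene symbol."""
            return Counter({g: len(re.findall(rf"\b{g}\b", text)) for g in CANDIDATE_GENES})

        if __name__ == "__main__":
            abstracts = fetch_abstracts("scleritis AND (gene OR cytokine)")
            print(count_gene_mentions(abstracts).most_common())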

    Random Forest in Clinical Metabolomics for Phenotypic Discrimination and Biomarker Selection

    Metabolomic data analysis becomes increasingly challenging when dealing with clinical samples that have diverse demographic and genetic backgrounds and various pathological conditions or treatments. Although many classification tools, such as projection to latent structures (PLS), support vector machine (SVM), linear discriminant analysis (LDA), and random forest (RF), have been successfully used in metabolomics, their performance in clinical data analysis, including their strengths and limitations, has remained unclear because these tools have not been systematically evaluated. In this paper, we comparatively evaluate the four classifiers, PLS, SVM, LDA, and RF, on clinical metabolomic data derived from a gas chromatography-mass spectrometry platform, comparing healthy subjects with patients diagnosed with colorectal cancer; the evaluation includes cross-validation, R2/Q2 plots, receiver operating characteristic curves, variable reduction, and Pearson correlation. RF outperforms the other three classifiers on the given clinical data sets, highlighting its comparative advantages as a classification and biomarker selection tool for clinical metabolomic data analysis.
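    As a rough illustration of the comparison described above, the following Python sketch fits RF, SVM, and LDA with cross-validated ROC AUC on a synthetic stand-in for the GC-MS feature table (the clinical data are not public, and PLS-DA is omitted because scikit-learn's PLS implementation needs a small classification wrapper); hyperparameters are illustrative rather than the paper's settings.

        # Minimal sketch; assumes scikit-learn and numpy are installed.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Synthetic "metabolite intensity" matrix: 100 samples x 50 features, 2 classes.
        X, y = make_classification(n_samples=100, n_features=50, n_informative=10, random_state=0)

        classifiers = {
            "RF": RandomForestClassifier(n_estimators=500, random_state=0),
            "SVM": make_pipeline(StandardScaler(), SVC()),
            "LDA": LinearDiscriminantAnalysis(),
        }

        for name, clf in classifiers.items():
            auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
            print(f"{name}: mean ROC AUC = {auc.mean():.3f} +/- {auc.std():.3f}")

        # RF additionally exposes feature importances, which support biomarker selection.
        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
        top = np.argsort(rf.feature_importances_)[::-1][:5]
        print("Top-ranked candidate biomarker features (column indices):", top)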

    OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization

    Recent work has shown that fine-tuning large pre-trained language models on a collection of tasks described via instructions, a.k.a. instruction-tuning, improves their zero-shot and few-shot generalization to unseen tasks. However, there is a limited understanding of the performance trade-offs of the different decisions made during the instruction-tuning process. These decisions include the scale and diversity of the instruction-tuning benchmark, different task sampling strategies, fine-tuning with and without demonstrations, training using specialized datasets for reasoning and dialogue, and finally, the fine-tuning objectives themselves. In this paper, we characterize the effect of instruction-tuning decisions on downstream task performance when scaling both model and benchmark sizes. To this end, we create OPT-IML Bench: a large benchmark for Instruction Meta-Learning (IML) of 2000 NLP tasks consolidated into task categories from 8 existing benchmarks, and prepare an evaluation framework to measure three types of model generalization: to tasks from fully held-out categories, to held-out tasks from seen categories, and to held-out instances from seen tasks. Through the lens of this framework, we first present insights about instruction-tuning decisions as applied to OPT-30B and then exploit these insights to train OPT-IML 30B and 175B, which are instruction-tuned versions of OPT. OPT-IML demonstrates all three generalization abilities at both scales on four different evaluation benchmarks with diverse tasks and input formats -- PromptSource, FLAN, Super-NaturalInstructions, and UnifiedSKG. Not only does it significantly outperform OPT on all benchmarks, but it is also highly competitive with existing models fine-tuned on each specific benchmark. We release OPT-IML at both scales, together with the OPT-IML Bench evaluation framework. Comment: 55 pages.
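    For readers unfamiliar with instruction-tuning, the Python sketch below shows a single supervised fine-tuning step on a small public OPT checkpoint, with the language-modeling loss computed only on the target tokens. The prompt template and example are hypothetical and do not reproduce the OPT-IML Bench format or training recipe.

        # Minimal sketch; assumes transformers and torch are installed.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
        model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

        # Hypothetical instruction-formatted example (not the OPT-IML template).
        instruction = "Classify the sentiment of the sentence as positive or negative."
        source = f"{instruction}\nInput: I loved this movie.\nOutput:"
        target = " positive"

        prompt_ids = tokenizer(source, return_tensors="pt").input_ids
        full_ids = tokenizer(source + target, return_tensors="pt").input_ids
        labels = full_ids.clone()
        labels[:, : prompt_ids.shape[1]] = -100  # approximate mask: ignore prompt tokens in the loss

        outputs = model(input_ids=full_ids, labels=labels)
        outputs.loss.backward()  # an optimizer step would follow in a real training loop
        print(float(outputs.loss))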

    Cardiovascular risk and events in 17 low-, middle-, and high-income countries

    BACKGROUND: More than 80% of deaths from cardiovascular disease are estimated to occur in low-income and middle-income countries, but the reasons are unknown. METHODS: We enrolled 156,424 persons from 628 urban and rural communities in 17 countries (3 high-income, 10 middle-income, and 4 low-income countries) and assessed their cardiovascular risk using the INTERHEART Risk Score, a validated score for quantifying risk-factor burden without the use of laboratory testing (with higher scores indicating greater risk-factor burden). Participants were followed for incident cardiovascular disease and death for a mean of 4.1 years. RESULTS: The mean INTERHEART Risk Score was highest in high-income countries, intermediate in middle-income countries, and lowest in low-income countries (P<0.001). However, the rates of major cardiovascular events (death from cardiovascular causes, myocardial infarction, stroke, or heart failure) were lower in high-income countries than in middle- and low-income countries (3.99 events per 1000 person-years vs. 5.38 and 6.43 events per 1000 person-years, respectively; P<0.001). Case fatality rates were also lowest in high-income countries (6.5%, 15.9%, and 17.3% in high-, middle-, and low-income countries, respectively; P = 0.01). Urban communities had a higher risk-factor burden than rural communities but lower rates of cardiovascular events (4.83 vs. 6.25 events per 1000 person-years, P<0.001) and case fatality rates (13.52% vs. 17.25%, P<0.001). The use of preventive medications and revascularization procedures was significantly more common in high-income countries than in middle- or low-income countries (P<0.001). CONCLUSIONS: Although the risk-factor burden was lowest in low-income countries, the rates of major cardiovascular disease and death were substantially higher in low-income countries than in high-income countries. The high burden of risk factors in high-income countries may have been mitigated by better control of risk factors and more frequent use of proven pharmacologic therapies and revascularization. (Funded by the Population Health Research Institute and others.)
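    The event and case-fatality rates quoted above follow standard definitions; the small Python sketch below shows how such figures are computed, using made-up numbers rather than PURE cohort data.

        # Illustrative calculation only; the inputs are hypothetical, not study data.
        def event_rate_per_1000_py(events, person_years):
            """Incident events per 1000 person-years of follow-up."""
            return 1000.0 * events / person_years

        def case_fatality_pct(deaths, events):
            """Deaths as a percentage of incident events."""
            return 100.0 * deaths / events

        # Example: 120 events over 25,000 person-years of follow-up, 19 of them fatal.
        print(round(event_rate_per_1000_py(120, 25_000), 2))  # 4.8 events per 1000 person-years
        print(round(case_fatality_pct(19, 120), 1))           # 15.8 percent case fatality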

    The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe

    The preponderance of matter over antimatter in the early Universe, the dynamics of the supernova bursts that produced the heavy elements necessary for life, and whether protons eventually decay --- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our Universe, its current state and its eventual fate. The Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed plan for a world-class experiment dedicated to addressing these questions. LBNE is conceived around three central components: (1) a new, high-intensity neutrino source generated from a megawatt-class proton accelerator at Fermi National Accelerator Laboratory, (2) a near neutrino detector just downstream of the source, and (3) a massive liquid argon time-projection chamber deployed as a far detector deep underground at the Sanford Underground Research Facility. This facility, located at the site of the former Homestake Mine in Lead, South Dakota, is approximately 1,300 km from the neutrino source at Fermilab -- a distance (baseline) that delivers optimal sensitivity to neutrino charge-parity symmetry violation and mass ordering effects. This ambitious yet cost-effective design incorporates scalability and flexibility and can accommodate a variety of upgrades and contributions. With its exceptional combination of experimental configuration, technical capabilities, and potential for transformative discoveries, LBNE promises to be a vital facility for the field of particle physics worldwide, providing physicists from around the globe with opportunities to collaborate in a twenty to thirty year program of exciting science. In this document we provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess. Comment: Major update of previous version. This is the reference document for the LBNE science program and current status. Chapters 1, 3, and 9 provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess. 288 pages, 116 figures.
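    The claim that a roughly 1,300 km baseline is well matched to oscillation physics can be illustrated with the leading-order, two-flavor vacuum approximation of the nu_mu -> nu_e appearance probability; the Python sketch below uses rounded global-fit oscillation parameters and deliberately ignores the matter effects and CP-violating phase that LBNE is designed to measure.

        # Back-of-the-envelope illustration only; parameter values are rounded global-fit estimates.
        import math

        dm2_31 = 2.5e-3      # atmospheric mass splitting in eV^2 (approximate)
        sin2_2th13 = 0.085   # sin^2(2*theta_13), approximate
        sin2_th23 = 0.5      # sin^2(theta_23), approximate

        L = 1300.0           # baseline in km (Fermilab to Sanford Lab)
        E = 2.5              # neutrino energy in GeV

        delta = 1.27 * dm2_31 * L / E                      # oscillation phase in radians
        p_appear = sin2_th23 * sin2_2th13 * math.sin(delta) ** 2
        print(f"Leading-order P(nu_mu -> nu_e) at {E} GeV: {p_appear:.3f}")  # near the first oscillation maximum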

    Observation of Cosmic Ray Anisotropy with Nine Years of IceCube Data


    The Acoustic Module for the IceCube Upgrade
