
    Calibration of the SNO+ experiment

    The main goal of the SNO+ experiment is to perform a low-background and high-isotope-mass search for neutrinoless double-beta decay, employing 780 tonnes of liquid scintillator loaded with tellurium, in its initial phase at 0.5% by mass for a total mass of 1330 kg of ¹³⁰Te. The SNO+ physics program also includes measurements of geo-neutrinos, reactor neutrinos, and supernova and solar neutrinos. Calibrations are an essential component of the SNO+ data-taking and analysis plan. Achieving the physics goals requires calibration that is both extensive and regular, serving several purposes: measuring detector parameters, validating the simulation model, and constraining systematic uncertainties on the reconstruction and particle identification algorithms. SNO+ faces stringent radiopurity requirements which, in turn, largely determine the material selection, sealing, and overall design of both the sources and the deployment systems. In fact, to avoid frequent access to the inner volume of the detector, several permanent optical calibration systems have been developed and installed outside that volume. At the same time, the calibration source internal deployment system was re-designed as a fully sealed system, with more stringent material selection, but following the same working principle as the system used in SNO. This poster describes the overall SNO+ calibration strategy, discusses several new and innovative sources, both optical and radioactive, and covers developments in the source deployment systems.

    Automated Fidelity Assessment for Strategy Training in Inpatient Rehabilitation using Natural Language Processing

    Strategy training is a multidisciplinary rehabilitation approach that teaches skills to reduce disability among those with cognitive impairments following a stroke. Strategy training has been shown in randomized, controlled clinical trials to be a more feasible and efficacious intervention for promoting independence than traditional rehabilitation approaches. A standardized fidelity assessment is used to measure adherence to treatment principles by examining guided and directed verbal cues in video recordings of rehabilitation sessions. Although the fidelity assessment for detecting guided and directed verbal cues is valid and feasible for single-site studies, it can become labor intensive, time consuming, and expensive in large, multi-site pragmatic trials. To address this challenge to widespread strategy training implementation, we leveraged natural language processing (NLP) techniques to automate the strategy training fidelity assessment, i.e., to automatically identify guided and directed verbal cues from video recordings of rehabilitation sessions. We developed a rule-based NLP algorithm, a long short-term memory (LSTM) model, and a Bidirectional Encoder Representations from Transformers (BERT) model for this task. The best performance was achieved by the BERT model, with an F1 score of 0.8075. This BERT model was verified on an external validation dataset collected from a separate major regional health system and achieved an F1 score of 0.8259, showing that the model generalizes well. The findings from this study hold widespread promise for psychology and rehabilitation intervention research and practice. Comment: Accepted at the AMIA Informatics Summit 202
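
    A minimal sketch of the kind of BERT-based cue classifier the abstract describes, using the Hugging Face transformers API. The checkpoint name, label set, and helper function are illustrative assumptions rather than the authors' implementation, and the model would need to be fine-tuned on annotated session transcripts before its predictions are meaningful.

        import torch
        from transformers import AutoTokenizer, AutoModelForSequenceClassification

        LABELS = ["no_cue", "guided_cue", "directed_cue"]  # assumed label scheme

        tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        model = AutoModelForSequenceClassification.from_pretrained(
            "bert-base-uncased", num_labels=len(LABELS)
        )  # classifier head is randomly initialized; fine-tune on labeled utterances before use

        def classify_utterance(text: str) -> str:
            """Predict the cue label for one transcribed therapist utterance."""
            inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
            with torch.no_grad():
                logits = model(**inputs).logits
            return LABELS[int(logits.argmax(dim=-1))]

        print(classify_utterance("What do you think your first step should be?"))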

    Challenges of implementing computer-aided diagnostic models for neuroimages in a clinical setting

    Advances in artificial intelligence have cultivated a strong interest in developing and validating the clinical utilities of computer-aided diagnostic models. Machine learning for diagnostic neuroimaging has often been applied to detect psychological and neurological disorders, typically on small-scale datasets or data collected in a research setting. With the collection and collation of an ever-growing number of public datasets that researchers can freely access, much work has been done in adapting machine learning models to classify these neuroimages by diseases such as Alzheimer’s, ADHD, autism, bipolar disorder, and so on. These studies often come with the promise of being implemented clinically, but despite intense interest in this topic in the laboratory, limited progress has been made in clinical implementation. In this review, we analyze challenges specific to the clinical implementation of diagnostic AI models for neuroimaging data, looking at the differences between laboratory and clinical settings, the inherent limitations of diagnostic AI, and the different incentives and skill sets between research institutions, technology companies, and hospitals. These complexities need to be recognized in the translation of diagnostic AI for neuroimaging from the laboratory to the clinic.

    The long noncoding RNA lncNB1 promotes tumorigenesis by interacting with ribosomal protein RPL35

    The majority of patients with neuroblastoma due to MYCN oncogene amplification and consequent N-Myc oncoprotein over-expression die of the disease. Here our analyses of RNA sequencing data identify the long noncoding RNA lncNB1 as one of the transcripts most over-expressed in MYCN-amplified, compared with MYCN-non-amplified, human neuroblastoma cells, and also the most over-expressed in neuroblastoma compared with all other cancers. lncNB1 binds to the ribosomal protein RPL35 to enhance E2F1 protein synthesis, leading to DEPDC1B gene transcription. The GTPase-activating protein DEPDC1B induces ERK protein phosphorylation and N-Myc protein stabilization. Importantly, lncNB1 knockdown abolishes neuroblastoma cell clonogenic capacity in vitro and leads to neuroblastoma tumor regression in mice, while high levels of lncNB1 and RPL35 in human neuroblastoma tissues predict poor patient prognosis. This study therefore identifies lncNB1 and its binding protein RPL35 as key factors promoting E2F1 protein synthesis, N-Myc protein stability, and N-Myc-driven oncogenesis, and as therapeutic targets.

    Revealing the missing expressed genes beyond the human reference genome by RNA-Seq

    BACKGROUND: A complete and accurate human reference genome is important for functional genomics research; consequently, the incompleteness of the reference genome and individual-specific sequences have significant effects on a wide range of studies. RESULTS: We used two RNA-Seq datasets, from human brain tissues and from 10 mixed cell lines, to investigate the completeness of the human reference genome. First, we demonstrated that within the previously identified ~5 Mb of Asian and ~5 Mb of African novel sequences that are absent from the NCBI build 36 human reference genome, ~211 kb and ~201 kb, respectively, could be transcribed. Our results suggest that many of those transcribed regions are not specific to Asians and Africans but are also present in Caucasians. We then found that 104 RefSeq genes that cannot be aligned to NCBI build 37 are expressed above 0.1 RPKM in brain and cell lines. Fifty-five of them are conserved across human, chimpanzee, and macaque, suggesting that a significant number of functional human genes are still absent from the human reference genome. Moreover, we identified hundreds of novel transcript contigs that cannot be aligned to NCBI build 37, RefSeq genes, or EST sequences. Some of these novel transcript contigs are also conserved among human, chimpanzee, and macaque. By positioning those contigs onto the human genome, we identified several large deletions in the reference genome. Several conserved novel transcript contigs were further validated by RT-PCR. CONCLUSION: Our findings demonstrate that a significant number of genes are still absent from the incomplete human reference genome, highlighting the importance of further refining the human reference genome and curating those missing genes. Our study also shows the importance of de novo transcriptome assembly. The comparative approach between the reference genome and other related human genomes based on the transcriptome provides an alternative way to refine the human reference genome.
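
    As a point of reference for the 0.1 RPKM expression threshold mentioned above, the sketch below shows how RPKM is computed from raw alignment counts. It is a generic illustration with made-up numbers, not the pipeline used in the study.

        def rpkm(gene_reads: int, gene_length_bp: int, total_mapped_reads: int) -> float:
            """Reads Per Kilobase of transcript per Million mapped reads."""
            return gene_reads * 1e9 / (gene_length_bp * total_mapped_reads)

        # Example: 40 reads on a 2 kb transcript in a library of 20 million mapped reads
        # gives 1.0 RPKM, well above the 0.1 RPKM expression threshold.
        value = rpkm(gene_reads=40, gene_length_bp=2000, total_mapped_reads=20_000_000)
        print(f"{value:.2f} RPKM")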

    Gene Expression Profiles Distinguish the Carcinogenic Effects of Aristolochic Acid in Target (Kidney) and Non-target (Liver) Tissues in Rats

    BACKGROUND: Aristolochic acid (AA) is the active component of herbal drugs derived from Aristolochia species that have been used for medicinal purposes since antiquity. AA, however, induces nephropathy and urothelial cancer in people and malignant tumors in the kidney and urinary tract of rodents. Although AA is bioactivated in both kidney and liver, it induces tumors only in the kidney. To evaluate whether microarray analysis can be used to distinguish the tissue-specific carcinogenicity of AA, we examined gene expression profiles in the kidney and liver of rats treated with carcinogenic doses of AA. RESULTS: Microarray analysis was performed using the Rat Genome Survey Microarray, and data analysis was carried out within the ArrayTrack software. Principal components analysis and hierarchical cluster analysis of the expression profiles showed that samples were grouped together according to tissue and treatment. The gene expression profiles were significantly altered by AA treatment in both kidney and liver (p < 0.01; fold change > 1.5). Functional analysis with Ingenuity Pathways Analysis showed that many more significantly altered genes were involved in cancer-related pathways in the kidney than in the liver. Also, analysis with the Gene Ontology for Functional Analysis (GOFFA) software indicated that biological processes related to defense response, apoptosis, and immune response were significantly altered by AA exposure in the kidney, but not in the liver. CONCLUSION: Our results suggest that microarray analysis is a useful tool for detecting AA exposure; that analysis of the gene expression profiles can distinguish the differential responses to the toxicity and carcinogenicity of AA in kidney and liver; and that the significant alteration of genes associated with defense response, apoptosis, and immune response in the kidney, but not in the liver, may be responsible for the tissue-specific toxicity and carcinogenicity of AA.
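
    The sketch below illustrates the two selection criteria stated in the abstract, a per-gene p-value below 0.01 and an absolute fold change above 1.5, applied to simulated intensities. It is an assumed, generic filter for illustration only, not the ArrayTrack or Ingenuity workflow used in the study.

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)
        treated = rng.lognormal(mean=1.2, sigma=0.3, size=(100, 4))  # 100 genes x 4 treated arrays
        control = rng.lognormal(mean=1.0, sigma=0.3, size=(100, 4))  # 100 genes x 4 control arrays

        fold_change = treated.mean(axis=1) / control.mean(axis=1)
        p_values = ttest_ind(treated, control, axis=1).pvalue

        # Keep genes that change more than 1.5-fold in either direction with p < 0.01.
        significant = (p_values < 0.01) & (np.maximum(fold_change, 1.0 / fold_change) > 1.5)
        print(f"{significant.sum()} of {len(significant)} genes pass both filters")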

    Scapegoat: John Dewey and the character education crisis

    Many conservatives, including some conservative scholars, blame the ideas and influence of John Dewey for what has frequently been called a crisis of character, a catastrophic decline in moral behavior in the schools and society of North America. Dewey’s critics claim that he is responsible for the undermining of the kinds of instruction that could lead to the development of character and the strengthening of the will, and that his educational philosophy and example exert a ubiquitous and disastrous influence on students’ conceptions of moral behavior. This article sets forth the views of some of these critics and juxtaposes them with what Dewey actually believed and wrote regarding character education. The juxtaposition demonstrates that Dewey neither called for nor exemplified the kinds of character-eroding pedagogy his critics accuse him of championing; in addition, this paper highlights the ways in which Dewey argued consistently and convincingly that the pedagogical approaches advocated by his critics are the real culprits in the decline of character and moral education.

    Computational approaches to explainable artificial intelligence: Advances in theory, applications and trends

    Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant AI advances from the last few years and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed, and discussed. In this way, we summarize the state of the art in AI methods, models, and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.

    Improvement in the Reproducibility and Accuracy of DNA Microarray Quantification by Optimizing Hybridization Conditions

    BACKGROUND: DNA microarrays, which have been increasingly used to monitor mRNA transcripts at a global level, can provide detailed insight into cellular processes involved in the response to drugs and toxins. This is leading to new understandings of the signaling networks that operate in the cell and of the molecular basis of diseases. Custom printed oligonucleotide arrays have proven to be an effective way to facilitate the application of DNA microarray technology. A successful microarray experiment, however, involves many steps: well-designed oligonucleotide probes, printing, RNA extraction and labeling, hybridization, and imaging. Optimization is essential to generate reliable microarray data. RESULTS: Hybridization and washing steps are crucial for a successful microarray experiment. When the hybridization and washing conditions recommended by an oligonucleotide provider were followed, the expression ratios were compressed more than expected, and data analysis revealed a high degree of non-specific binding. A series of experiments was conducted using the rat mixed tissue RNA reference material (MTRRM) and other RNA samples to optimize the hybridization and washing conditions. The optimized hybridization and washing conditions greatly reduced the non-specific binding and improved the accuracy of spot intensity measurements. CONCLUSION: The optimized hybridization and washing conditions greatly improved the reproducibility and accuracy of the expression ratios. These experiments also suggested the importance of probe design using better bioinformatics approaches, and the need for common reference RNA samples for platform performance evaluation, in order to fulfill the potential of DNA microarray technology.
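
    The ratio compression mentioned above can be illustrated with simple arithmetic: a uniform non-specific background added to both channels of a two-color array pulls the measured ratio toward 1. The numbers below are made up for illustration and are not measurements from the study.

        true_ratio = 4.0          # true Cy5/Cy3 expression ratio
        cy3_signal = 500.0        # specific signal in the reference channel
        cy5_signal = cy3_signal * true_ratio

        for background in (0.0, 200.0, 1000.0):  # increasing non-specific binding
            measured = (cy5_signal + background) / (cy3_signal + background)
            print(f"background={background:6.0f}  measured ratio={measured:.2f}  (true {true_ratio})")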