179 research outputs found

    Calibration of the SNO+ experiment

    The main goal of the SNO+ experiment is a low-background, high-isotope-mass search for neutrinoless double-beta decay, employing 780 tonnes of liquid scintillator loaded with tellurium, in its initial phase at 0.5% by mass for a total of 1330 kg of ¹³⁰Te. The SNO+ physics program also includes measurements of geo- and reactor neutrinos, and of supernova and solar neutrinos. Calibrations are an essential component of the SNO+ data-taking and analysis plan. Achieving the physics goals requires calibration that is both extensive and regular, serving several purposes: the measurement of several detector parameters, the validation of the simulation model, and the constraint of systematic uncertainties on the reconstruction and particle-identification algorithms. SNO+ faces stringent radiopurity requirements which, in turn, largely determine the materials selection, sealing, and overall design of both the sources and the deployment systems. In fact, to avoid frequent access to the inner volume of the detector, several permanent optical calibration systems have been developed and installed outside that volume. At the same time, the internal calibration-source deployment system was redesigned as a fully sealed system, with more stringent material selection, but following the same working principle as the system used in SNO. This poster described the overall SNO+ calibration strategy, discussed several new and innovative sources, both optical and radioactive, and covered developments in the source deployment systems.

    Challenges of implementing computer-aided diagnostic models for neuroimages in a clinical setting

    Advances in artificial intelligence have cultivated a strong interest in developing and validating the clinical utilities of computer-aided diagnostic models. Machine learning for diagnostic neuroimaging has often been applied to detect psychological and neurological disorders, typically on small-scale datasets or data collected in a research setting. With the collection and collation of an ever-growing number of public datasets that researchers can freely access, much work has been done in adapting machine learning models to classify these neuroimages by diseases such as Alzheimer’s, ADHD, autism, bipolar disorder, and so on. These studies often come with the promise of being implemented clinically, but despite intense interest in this topic in the laboratory, limited progress has been made in clinical implementation. In this review, we analyze challenges specific to the clinical implementation of diagnostic AI models for neuroimaging data, looking at the differences between laboratory and clinical settings, the inherent limitations of diagnostic AI, and the different incentives and skill sets between research institutions, technology companies, and hospitals. These complexities need to be recognized in the translation of diagnostic AI for neuroimaging from the laboratory to the clinic.

    Scapegoat: John Dewey and the character education crisis

    Many conservatives, including some conservative scholars, blame the ideas and influence of John Dewey for what has frequently been called a crisis of character, a catastrophic decline in moral behavior in the schools and society of North America. Dewey’s critics claim that he is responsible for the undermining of the kinds of instruction that could lead to the development of character and the strengthening of the will, and that his educational philosophy and example exert a ubiquitous and disastrous influence on students’ conceptions of moral behavior. This article sets forth the views of some of these critics and juxtaposes them with what Dewey actually believed and wrote regarding character education. The juxtaposition demonstrates that Dewey neither called for nor exemplified the kinds of character-eroding pedagogy his critics accuse him of championing; in addition, this article highlights the ways in which Dewey argued consistently and convincingly that the pedagogical approaches advocated by his critics are the real culprits in the decline of character and moral education.

    Computational approaches to explainable artificial intelligence: Advances in theory, applications and trends

    Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances from the last few years in Artificial Intelligence (AI) and several applications to neuroscience, neuroimaging, computer vision, and robotics are presented, reviewed, and discussed. In this way, we summarize the state of the art in AI methods, models, and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.

    The long noncoding RNA lncNB1 promotes tumorigenesis by interacting with ribosomal protein RPL35

    The majority of patients with neuroblastoma due to MYCN oncogene amplification and consequent N-Myc oncoprotein over-expression die of the disease. Here our analyses of RNA sequencing data identify the long noncoding RNA lncNB1 as one of the transcripts most over-expressed in MYCN-amplified, compared with MYCN-non-amplified, human neuroblastoma cells and also the most over-expressed in neuroblastoma compared with all other cancers. lncNB1 binds to the ribosomal protein RPL35 to enhance E2F1 protein synthesis, leading to DEPDC1B gene transcription. The GTPase-activating protein DEPDC1B induces ERK protein phosphorylation and N-Myc protein stabilization. Importantly, lncNB1 knockdown abolishes neuroblastoma cell clonogenic capacity in vitro and leads to neuroblastoma tumor regression in mice, while high levels of lncNB1 and RPL35 in human neuroblastoma tissues predict poor patient prognosis. This study therefore identifies lncNB1 and its binding protein RPL35 as key factors for promoting E2F1 protein synthesis, N-Myc protein stability, and N-Myc-driven oncogenesis, and as therapeutic targets.

    Measurement of the ⁸B solar neutrino flux in SNO+ with very low backgrounds

    A measurement of the ⁸B solar neutrino flux has been made using a 69.2 kt-day dataset acquired with the SNO+ detector during its water commissioning phase. At energies above 6 MeV the dataset is an extremely pure sample of solar neutrino elastic scattering events, owing primarily to the detector’s deep location, allowing an accurate measurement with relatively little exposure. In that energy region the best-fit background rate is 0.25 +0.09/−0.07 events/(kt-day), significantly lower than the measured solar neutrino event rate in that energy range, which is 1.03 +0.13/−0.12 events/(kt-day). Also using data below this threshold, down to 5 MeV, fits of the solar neutrino event direction yielded an observed flux of 2.53 +0.31/−0.28 (stat.) +0.13/−0.10 (syst.) × 10⁶ cm⁻² s⁻¹, assuming no neutrino oscillations. This rate is consistent with matter-enhanced neutrino oscillations and with measurements from other experiments.
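    As an illustrative back-of-the-envelope check (not a calculation from the paper itself), the central values of the quoted rates above 6 MeV can be converted into approximate event counts over the 69.2 kt-day exposure:

    ```python
    exposure_kt_day = 69.2  # water-phase dataset size quoted in the abstract

    # Best-fit rates above 6 MeV, in events per kt-day
    # (central values only; the asymmetric uncertainties are ignored here)
    background_rate = 0.25
    solar_rate = 1.03

    background_events = background_rate * exposure_kt_day  # ~17 events
    solar_events = solar_rate * exposure_kt_day            # ~71 events
    ```

    The small absolute background count relative to the signal is what makes the sample "extremely pure" despite the modest exposure.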

    Revealing the missing expressed genes beyond the human reference genome by RNA-Seq

    Background: The complete and accurate human reference genome is important for functional genomics research; the incomplete reference genome and individual-specific sequences therefore have significant effects on various studies.
    Results: We used two RNA-Seq datasets, from human brain tissues and 10 mixed cell lines, to investigate the completeness of the human reference genome. First, we demonstrated that of the previously identified ~5 Mb Asian and ~5 Mb African novel sequences that are absent from the human reference genome of NCBI build 36, ~211 kb and ~201 kb, respectively, could be transcribed. Our results suggest that many of those transcribed regions are not specific to Asians and Africans, but are also present in Caucasians. We then found that 104 RefSeq genes that are unalignable to NCBI build 37 are expressed above 0.1 RPKM in brain and cell lines; 55 of them are conserved across human, chimpanzee, and macaque, suggesting that there is still a significant number of functional human genes absent from the human reference genome. Moreover, we identified hundreds of novel transcript contigs that cannot be aligned to NCBI build 37, RefSeq genes, or EST sequences. Some of those novel transcript contigs are also conserved among human, chimpanzee, and macaque. By positioning those contigs onto the human genome, we identified several large deletions in the reference genome. Several conserved novel transcript contigs were further validated by RT-PCR.
    Conclusion: Our findings demonstrate that a significant number of genes are still absent from the incomplete human reference genome, highlighting the importance of further refining the human reference genome and curating those missing genes. Our study also shows the importance of de novo transcriptome assembly. The comparative approach between the reference genome and other related human genomes based on the transcriptome provides an alternative way to refine the human reference genome.
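    The 0.1 RPKM expression cutoff used in the Results can be made concrete with a short sketch of the standard RPKM normalization (illustrative only; the read count, gene length, and library size below are hypothetical, not from the study):

    ```python
    def rpkm(read_count, gene_length_bp, total_mapped_reads):
        """Reads Per Kilobase of transcript per Million mapped reads:
        raw counts normalized by gene length (kb) and library size (millions)."""
        return read_count * 1e9 / (gene_length_bp * total_mapped_reads)

    # Hypothetical gene: 250 reads on a 2,000 bp gene,
    # in a library of 50 million mapped reads
    value = rpkm(250, 2000, 50_000_000)   # -> 2.5 RPKM
    expressed = value > 0.1               # passes the study's 0.1 RPKM cutoff
    ```

    Normalizing by both gene length and sequencing depth is what allows a single threshold such as 0.1 RPKM to be applied uniformly across genes and datasets.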
