114 research outputs found

    Student Use of the Internet for Research Projects: A Problem? Our Problem? What Can We Do About It?

    The Internet and other electronic media have changed the way undergraduate students conduct research. The effects of this technological change on the role of the professor are still not well understood. This article reports on the findings of a recent study that evaluated the scholarly content of student citations in a political science course and tested two interventions designed to improve their quality. The study finds that these students' use of electronic sources was not as poor as some may have assumed, and that the quality of bibliographies improved when in-class instruction was combined with academic penalties. This article reflects on the study's findings and offers suggestions for how instructors might encourage students to improve the quality of their research.

    Unsupervised Medical Image Translation Using Cycle-MedGAN

    Image-to-image translation is a new field in computer vision with multiple potential applications in the medical domain. However, supervised image translation frameworks require co-registered datasets, paired in a pixel-wise sense, which are often difficult to acquire in realistic medical scenarios. On the other hand, unsupervised translation frameworks often produce blurred translated images with unrealistic details. In this work, we propose a new unsupervised translation framework titled Cycle-MedGAN. The proposed framework utilizes new non-adversarial cycle losses which direct the framework to minimize the textural and perceptual discrepancies in the translated images. Qualitative and quantitative comparisons against other unsupervised translation approaches demonstrate the performance of the proposed framework for PET-CT translation and MR motion correction. Comment: Submitted to EUSIPCO 2019, 5 pages.
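    For illustration, a minimal sketch of the kind of non-adversarial cycle losses described above, assuming PyTorch; the generators G_AB and G_BA and the feature_extractor are hypothetical placeholders, not the authors' Cycle-MedGAN implementation.

```python
# Sketch of textural (pixel-level) and perceptual (feature-level) cycle losses
# in the spirit of the abstract above. All modules here are placeholders.
import torch
import torch.nn.functional as F

def cycle_losses(real_a, real_b, G_AB, G_BA, feature_extractor, lambda_percep=1.0):
    # Forward cycle: A -> B -> A, and backward cycle: B -> A -> B
    rec_a = G_BA(G_AB(real_a))
    rec_b = G_AB(G_BA(real_b))

    # Textural discrepancy: L1 distance between inputs and cycle reconstructions
    loss_texture = F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b)

    # Perceptual discrepancy: L1 distance in the feature space of a fixed network
    with torch.no_grad():
        feat_a = feature_extractor(real_a)
        feat_b = feature_extractor(real_b)
    loss_percep = F.l1_loss(feature_extractor(rec_a), feat_a) + \
                  F.l1_loss(feature_extractor(rec_b), feat_b)

    return loss_texture + lambda_percep * loss_percep
```

    In practice these terms would be combined with the framework's other training losses; the weighting used here is arbitrary and only meant to make the loss structure concrete.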

    Anomaly Detection for Vision-based Railway Inspection

    Riccardo Gasparini; Stefano Pini; Guido Borghi; Giuseppe Scaglione; Simone Calderara; Eugenio Fedeli; Rita Cucchiara

    GAN-based multiple adjacent brain MRI slice reconstruction for unsupervised Alzheimer's disease diagnosis

    Unsupervised learning can discover various unseen diseases, relying on large-scale unannotated medical images of healthy subjects. Towards this, unsupervised methods reconstruct a single medical image to detect outliers either in the learned feature space or from high reconstruction loss. However, without considering continuity between multiple adjacent slices, they cannot directly discriminate diseases composed of the accumulation of subtle anatomical anomalies, such as Alzheimer's Disease (AD). Moreover, no study has shown how unsupervised anomaly detection is associated with disease stages. Therefore, we propose a two-step method using Generative Adversarial Network-based multiple adjacent brain MRI slice reconstruction to detect AD at various stages: (Reconstruction) a Wasserstein loss with Gradient Penalty + L1 loss, trained on 3 healthy slices to reconstruct the next 3 ones, reconstructs unseen healthy/AD cases; (Diagnosis) the average/maximum loss (e.g., L2 loss) per scan discriminates them by comparing the reconstructed and ground truth images. The results show that we can reliably detect AD at a very early stage with Area Under the Curve (AUC) 0.780, while detecting AD at a late stage much more accurately with AUC 0.917; since our method is fully unsupervised, it should also discover and flag other anomalies, including rare diseases. Comment: 10 pages, 4 figures, accepted to Lecture Notes in Bioinformatics (LNBI) as a volume in the Springer series.
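    As a rough illustration of the diagnosis step, the sketch below computes an average or maximum L2 reconstruction loss per scan, assuming PyTorch; the reconstructor (3 slices in, next 3 slices out) and the sliding-window layout are hypothetical stand-ins, not the authors' code.

```python
# Per-scan anomaly score from reconstruction error, following the two-step
# method sketched above. `reconstructor` is a hypothetical trained model that
# maps 3 consecutive slices to a prediction of the next 3.
import torch

def scan_anomaly_score(slices, reconstructor, reduce="mean"):
    """slices: tensor of shape (num_slices, H, W) for one scan; needs >= 6 slices."""
    window_losses = []
    for i in range(slices.shape[0] - 5):
        inp = slices[i:i + 3].unsqueeze(0)         # (1, 3, H, W) input slices
        target = slices[i + 3:i + 6].unsqueeze(0)  # (1, 3, H, W) ground truth
        with torch.no_grad():
            rec = reconstructor(inp)
        window_losses.append(torch.mean((rec - target) ** 2))  # L2 loss
    window_losses = torch.stack(window_losses)
    # Average or maximum loss per scan; a high score suggests an anomaly (e.g., AD)
    return window_losses.mean() if reduce == "mean" else window_losses.max()
```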

    Challenge 6: Ethical, legal, economic, and social implications

    In six decades of history, AI has become a mature and strategic discipline, successfully embedded in mainstream ICT and powering innumerable online applications and platforms. Several official documents stating specific AI policies have been produced by international organisations (like the OECD), regional bodies (EU), several countries (US, China, Spain, Germany, UK, Sweden, Brazil, Mexico...) as well as major AI-powered firms (Google, Facebook, Amazon). These examples demonstrate public interest in and awareness of the economic and societal value of AI, and the urgency of discussing the ethical, legal, economic and social implications of deploying AI systems on a massive scale. There is widespread agreement about the relevance of addressing ethical aspects of AI, an urgency to demonstrate that AI is used for the common good, and the need for better training, education and regulation to foster responsible research and innovation in AI. This chapter is organised around four main areas: ethics, law, economics and society (ELES). These areas shape the development of AI research and innovation, which, in turn, influence these four areas of human activity. This interplay opens questions and demands new methods, objectives and ways to design future technologies. This chapter identifies the main impacts and salient challenges in each of these four areas. Peer reviewed

    Weakly-Supervised Evidence Pinpointing and Description

    We propose a learning method to identify which specific regions and features of images contribute to a certain classification. In the medical imaging context, they can be the evidence regions where the abnormalities are most likely to appear, and the discriminative features of these regions supporting the pathology classification. The learning is weakly supervised, requiring only the pathological labels and no other prior knowledge. The method can also be applied to learn the salient description of an anatomy discriminative from its background, in order to localise the anatomy before a classification step. We formulate evidence pinpointing as a sparse descriptor learning problem. Because of the large computational complexity, the objective function is composed in a stochastic way and is optimised by the Regularised Dual Averaging algorithm. We demonstrate that the learnt feature descriptors contain more specific and better discriminative information than hand-crafted descriptors, contributing to superior performance for the tasks of anatomy localisation and pathology classification, respectively. We apply our method on the problem of lumbar spinal stenosis for localising and classifying vertebrae in MRI images. Experimental results show that our method, when trained with only target labels, achieves better or competitive performance on both tasks compared with strongly-supervised methods requiring labels and multiple landmarks. A further improvement is achieved with training on additional weakly annotated data, which gives robust localisation with average error within 2 mm and classification accuracies close to human performance.
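    Since the abstract names the Regularised Dual Averaging (RDA) algorithm, a minimal sketch of an l1-regularised RDA update loop follows, assuming NumPy; the stochastic gradient oracle grad_fn is a hypothetical stand-in, not the paper's sparse descriptor objective.

```python
# l1-regularised RDA: each step soft-thresholds the running average of
# stochastic gradients, driving many coordinates exactly to zero (the kind of
# sparsity used for descriptor selection in methods like the one above).
import numpy as np

def rda_l1(grad_fn, dim, num_steps, lam=0.01, gamma=1.0, seed=0):
    """grad_fn(w, rng) returns a stochastic (sub)gradient of the loss at w."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    g_bar = np.zeros(dim)  # running average of stochastic gradients
    for t in range(1, num_steps + 1):
        g = grad_fn(w, rng)
        g_bar += (g - g_bar) / t  # dual-average update
        # Closed-form step: coordinates with |g_bar| <= lam become exactly zero
        shrunk = np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
        w = -(np.sqrt(t) / gamma) * shrunk
    return w
```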

    Building ProteomeTools based on a complete synthetic human proteome.

    We describe ProteomeTools, a project building molecular and digital tools from the human proteome to facilitate biomedical research. Here we report the generation and multimodal liquid chromatography-tandem mass spectrometry analysis of >330,000 synthetic tryptic peptides representing essentially all canonical human gene products, and we exemplify the utility of these data in several applications. The resource (available at http://www.proteometools.org) will be extended to >1 million peptides, and all data will be shared with the community via ProteomicsDB and ProteomeXchange.