
    Initial experiences in developing e-health solutions across Scotland

    The MRC-funded Virtual Organisations for Trials and Epidemiological Studies (VOTES) project is a collaborative effort between e-Science, clinical and ethical research centres across the UK, including the universities of Oxford, Glasgow, Nottingham and Leicester, and Imperial College London. The project started in September 2005 and is due to run for three years. The primary goal of VOTES is to develop a reusable Grid framework through which a multitude of clinical trials and epidemiological studies can be supported. The National e-Science Centre (NeSC) at the University of Glasgow is developing the Scottish components of this framework. This paper presents initial experiences in developing this framework and in accessing and using existing data sets, services and software across the NHS in Scotland.

    Classifying Cognitive Profiles Using Machine Learning with Privileged Information in Mild Cognitive Impairment

    Early diagnosis of dementia is critical for assessing disease progression and potential treatment. State-of-the-art machine learning techniques have been increasingly employed to take on this diagnostic task. In this study, we employed Generalised Matrix Learning Vector Quantization (GMLVQ) classifiers to discriminate patients with Mild Cognitive Impairment (MCI) from healthy controls based on their cognitive skills. Further, we adopted a "Learning with privileged information" approach to combine cognitive and fMRI data for the classification task. The resulting classifier operates solely on the cognitive data while it incorporates the fMRI data as privileged information (PI) during training. This novel classifier is of practical use as the collection of brain imaging data is not always possible with patients and older participants. MCI patients and healthy age-matched controls were trained to extract structure from temporal sequences. We ask whether machine learning classifiers can be used to discriminate patients from controls based on their learning performance and whether differences between these groups relate to individual cognitive profiles. To this end, we tested participants in four cognitive tasks: working memory, cognitive inhibition, divided attention, and selective attention. We also collected fMRI data before and after training on the learning task and extracted fMRI responses and connectivity as features for machine learning classifiers. Our results show that the PI-guided GMLVQ classifiers outperform the baseline classifier that only used the cognitive data. In addition, we found that for the baseline classifier, divided attention is the only relevant cognitive feature. When PI was incorporated, divided attention remained the most relevant feature while cognitive inhibition also became relevant for the task.
Interestingly, this analysis for the fMRI GMLVQ classifier suggests that (1) when the overall fMRI signal for structured stimuli is used as input to the classifier, the post-training session is most relevant; and (2) when the graph feature reflecting the underlying spatiotemporal fMRI pattern is used, the pre-training session is most relevant. Taken together, these results suggest that brain connectivity before training and overall fMRI signal after training are both diagnostic of cognitive skills in MCI.
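The privileged-information idea above can be illustrated with a minimal sketch. This is not the authors' GMLVQ method: it is a crude distillation-style stand-in on synthetic data, where a "teacher" trained on privileged features reweights the training samples of a "student" that sees only the non-privileged features. All names, feature dimensions, and data here are invented for illustration.

```python
# Hedged sketch of learning using privileged information (LUPI).
# NOT the paper's GMLVQ-PI classifier: a plain logistic regression
# (numpy only) where a teacher trained on privileged "fMRI" features
# reweights training samples for a student that sees only "cognitive"
# features. All data are synthetic.
import numpy as np

def fit_logreg(X, y, sample_weight=None, lr=0.1, steps=500):
    """Logistic regression by gradient descent (no external libraries)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    sw = np.ones(n) if sample_weight is None else sample_weight
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = sw * (p - y)                 # weighted residuals
        w -= lr * (X.T @ g) / n
        b -= lr * g.mean()
    return w, b

def predict_proba(X, w, b):
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, n).astype(float)             # synthetic MCI vs control
priv = y[:, None] + 0.5 * rng.normal(size=(n, 4))   # clean privileged features
cog = y[:, None] + 1.5 * rng.normal(size=(n, 3))    # noisier observable features

# Teacher uses the privileged features (training time only).
wt, bt = fit_logreg(priv, y)
conf = np.abs(predict_proba(priv, wt, bt) - 0.5) * 2  # teacher confidence

# Student sees only the observable features; teacher confidence
# reweights its training samples, transferring PI indirectly.
ws, bs = fit_logreg(cog, y, sample_weight=0.2 + conf)
acc = ((predict_proba(cog, ws, bs) > 0.5) == (y > 0.5)).mean()
print(round(acc, 2))
```

At test time only `cog` is needed, mirroring the paper's setting in which fMRI data guide training but are not required for deployment.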

    Interpretable Medical Image Classification using Prototype Learning and Privileged Information

    Interpretability is often an essential requirement in medical imaging. Advanced deep learning methods are required to address this need for explainability while maintaining high performance. In this work, we investigate whether additional information available during the training process can be used to create an understandable and powerful model. We propose an innovative solution called Proto-Caps that leverages the benefits of capsule networks, prototype learning and the use of privileged information. Evaluating the proposed solution on the LIDC-IDRI dataset shows that it combines increased interpretability with above state-of-the-art prediction performance. Compared to the explainable baseline model, our method achieves more than 6% higher accuracy in predicting both malignancy (93.0%) and mean characteristic features of lung nodules. Simultaneously, the model provides case-based reasoning with prototype representations that allow visual validation of radiologist-defined attributes. Comment: MICCAI 2023 Medical Image Computing and Computer Assisted Intervention
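The case-based reasoning Proto-Caps builds on can be sketched in its simplest form as nearest-prototype classification: each class is summarised by a prototype vector, and a prediction is "explained" by pointing at the closest prototype. The sketch below is a deliberately simplified, hypothetical stand-in (plain class means in numpy, no capsule network or learned attributes) rather than the Proto-Caps architecture.

```python
# Hedged sketch: nearest-prototype classification, the simplest form of
# prototype-based reasoning. Prototypes here are plain class means in
# feature space, not learned capsule outputs; data are synthetic.
import numpy as np

def fit_prototypes(X, y):
    """One prototype per class: the mean feature vector of that class."""
    classes = np.unique(y)
    return classes, np.stack([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, protos):
    """Assign each sample to its nearest prototype (Euclidean distance).
    The winning prototype doubles as the case-based 'explanation'."""
    d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

rng = np.random.default_rng(1)
X0 = rng.normal(loc=0.0, size=(50, 5))   # synthetic "benign" features
X1 = rng.normal(loc=2.0, size=(50, 5))   # synthetic "malignant" features
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

classes, protos = fit_prototypes(X, y)
preds = predict(X, classes, protos)
print((preds == y).mean())
```

The interpretability payoff is that a clinician can inspect the prototype a decision was matched to; Proto-Caps extends this idea with learned prototypes tied to radiologist-defined attributes.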

    The LAB@FUTURE Project - Moving Towards the Future of E-Learning

    This paper presents Lab@Future, an advanced e-learning platform that uses novel Information and Communication Technologies to support and expand laboratory teaching practices. For this purpose, Lab@Future uses real and computer-generated objects that are interfaced using mechatronic systems, augmented reality, mobile technologies and 3D multi-user environments. The main aim is to develop and demonstrate technological support for practical experiments in the following subjects: Fluid Dynamics (science, Germany), Geometry (mathematics, Austria), and History and Environmental Awareness (arts and humanities, Greece and Slovenia). In order to pedagogically enhance the design and functional aspects of this e-learning technology, we are investigating the dialogical operationalisation of learning theories so as to leverage our understanding of teaching and learning practices in the targeted context of deployment.

    Visual world studies of conversational perspective taking: similar findings, diverging interpretations

    Visual-world eyetracking greatly expanded the potential for insight into how listeners access and use common ground during situated language comprehension. Past reviews of visual world studies on perspective taking have largely taken the diverging findings of the various studies at face value, attributing these apparently different findings to differences in the extent to which the paradigms used by different labs afford collaborative interaction. Researchers are asking questions about perspective taking of an increasingly nuanced and sophisticated nature, a clear indicator of progress. But this research has the potential not only to improve our understanding of conversational perspective taking: grappling with problems of data interpretation in such a complex domain also has the unique potential to drive visual world researchers to a deeper understanding of how best to map visual world data onto psycholinguistic theory. I will argue against this interactional affordances explanation on two counts. First, it implies that interactivity affects the overall ability to form common ground, and thus provides no straightforward explanation of why, within a single noninteractive study, common ground can have very large effects on some aspects of processing (referential anticipation) while having negligible effects on others (lexical processing). Second, and more importantly, the explanation accepts the divergence in published findings at face value. However, a closer look at several key studies shows that the divergences are more likely to reflect inconsistent practices of analysis and interpretation applied to an underlying body of data that is, in fact, surprisingly consistent. The diverging interpretations, I will argue, result from differences in the handling of anticipatory baseline effects (ABEs) in the analysis of visual world data.
ABEs arise in perspective-taking studies because listeners have earlier access to constraining information about who knows what than they have to referential speech, and thus can already show biases in visual attention even before the processing of any referential speech has begun. To be sure, these ABEs clearly indicate early access to common ground; however, access does not imply integration, since it is possible that this information is not later used to modulate the processing of incoming speech. Failing to account for these biases using statistical or experimental controls leads to over-optimistic assessments of listeners' ability to integrate this information with incoming speech. I will show that several key studies with varying degrees of interactional affordances all show similar temporal profiles of common ground use during the interpretive process: early anticipatory effects, followed by bottom-up effects of lexical processing that are not modulated by common ground, followed (optionally) by further late effects that are likely to be post-lexical. Furthermore, this temporal profile for common ground differs radically from the profile of contextual effects related to verb semantics. Together, these findings are consistent with the proposal that lexical processes are encapsulated from common ground, but cannot be straightforwardly accounted for by probabilistic constraint-based approaches.
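One simple statistical control for anticipatory baseline effects is to subtract each trial's pre-onset fixation bias from its post-onset fixation proportions, so that only changes driven by the referential speech remain. The sketch below is an illustration of that general idea on invented numbers, not any published study's analysis pipeline; the parameters (bin counts, onset position, effect sizes) are all hypothetical.

```python
# Illustrative sketch (not a published pipeline): removing anticipatory
# baseline effects (ABEs) by subtracting each trial's mean pre-onset
# target-fixation proportion from its whole time course. Data synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_bins = 40, 20
onset = 5                                  # speech onset at time bin 5

# Synthetic target-fixation proportions: a per-trial anticipatory bias
# present before onset, plus a genuine lexical effect ramping up after.
bias = rng.uniform(0.5, 0.8, size=(n_trials, 1))            # the ABE
lexical = np.clip((np.arange(n_bins) - onset) * 0.03, 0, None)
fix = np.clip(bias + lexical + 0.05 * rng.normal(size=(n_trials, n_bins)), 0, 1)

# Baseline correction: subtract each trial's mean pre-onset proportion.
baseline = fix[:, :onset].mean(axis=1, keepdims=True)
corrected = fix - baseline

print(corrected[:, :onset].mean())   # pre-onset bias removed (~0)
print(corrected[:, -1].mean())       # post-onset lexical effect remains
```

Without this step, the pre-onset bias carries forward into every post-onset bin, which is exactly the over-optimistic pattern the abstract warns about.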

    Do scale alliances with competitors improve product performance? A study of the aircraft industry, 1949-2000

    This paper investigates the performance impact of undertaking activities through scale alliances with competitors rather than undertaking them autonomously. Keywords: strategy; governance; alliance; cooperation; performance; scale economies; aircraft.
