
    Annotated Bibliography: Anticipation


    Statistical Physics and Representations in Real and Artificial Neural Networks

    This document presents the material of two lectures on statistical physics and neural representations, delivered by one of us (R.M.) at the Fundamental Problems in Statistical Physics XIV summer school in July 2017. In the first part, we consider the neural representations of space (maps) in the hippocampus. We introduce an extension of the Hopfield model, able to store multiple spatial maps as continuous, finite-dimensional attractors. The phase diagram and dynamical properties of the model are analyzed. We then show how spatial representations can be dynamically decoded using an effective Ising model capturing the correlation structure in the neural data, and compare applications to data obtained from hippocampal multi-electrode recordings and by (sub)sampling our attractor model. In the second part, we focus on the problem of learning data representations in machine learning, in particular with artificial neural networks. We start by introducing data representations through some illustrations. We then analyze two important algorithms, Principal Component Analysis and Restricted Boltzmann Machines, with tools from statistical physics.
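    As background for the abstract above, the sketch below shows only the classical discrete Hopfield model (Hebbian storage of binary patterns recovered from a corrupted cue), not the authors' continuous-attractor extension; the patterns, network size, and cue are invented for illustration.

    ```python
    # Minimal classical Hopfield sketch: store two binary patterns with a
    # Hebbian rule, then recover one from a cue with a flipped bit.
    # All patterns and sizes below are illustrative assumptions.

    def train(patterns):
        """Hebbian weights w_ij = (1/N) * sum_mu p_i^mu p_j^mu, zero diagonal."""
        n = len(patterns[0])
        w = [[0.0] * n for _ in range(n)]
        for p in patterns:
            for i in range(n):
                for j in range(n):
                    if i != j:
                        w[i][j] += p[i] * p[j] / n
        return w

    def recall(w, state, steps=5):
        """Synchronous sign updates until a fixed point (or step limit)."""
        n = len(state)
        for _ in range(steps):
            new = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                   for i in range(n)]
            if new == state:
                break
            state = new
        return state

    p1 = [1, 1, 1, 1, -1, -1, -1, -1]
    p2 = [1, -1, 1, -1, 1, -1, 1, -1]
    w = train([p1, p2])
    cue = [-1] + p1[1:]        # p1 with its first bit flipped
    print(recall(w, cue))      # the attractor dynamics restore p1
    ```

    With two orthogonal patterns in eight neurons the corrupted cue falls back onto the stored attractor in a single synchronous sweep.
    
    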

    Modeling Big Medical Survival Data Using Decision Tree Analysis with Apache Spark

    In many medical studies, an outcome of interest is not only whether an event occurred, but when it occurred; an example of this is Alzheimer’s disease (AD). Identifying patients with Mild Cognitive Impairment (MCI) who are likely to develop Alzheimer’s disease (AD) is highly important for AD treatment. Previous studies suggest that not all MCI patients will convert to AD. Massive amounts of data from longitudinal and extensive studies on thousands of Alzheimer’s patients have been generated. Building a computational model that can predict conversion from MCI to AD can be highly beneficial for early intervention and treatment planning for AD. This work presents a big data model that applies machine-learning techniques to determine the level of AD in a participant and predict the time of conversion to AD. The proposed framework considers one of the most widely used screening assessments for detecting cognitive impairment, the Montreal Cognitive Assessment (MoCA). The MoCA data set was collected from different centers and integrated into our large data framework storage using a Hadoop Distributed File System (HDFS); the data was then analyzed using an Apache Spark framework. The accuracy of the proposed framework was compared with a semi-parametric Cox survival analysis model.
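    To make the decision-tree idea in the abstract concrete, here is a deliberately tiny sketch: a single regression-tree split on a toy time-to-event dataset, choosing the threshold that minimizes the summed variance of conversion times in the two child nodes. The MoCA scores and conversion times are invented, and this pure-Python stump merely stands in for the Spark-based decision trees the paper actually uses.

    ```python
    # Hypothetical example: find the best single split of MoCA score that
    # separates short from long times-to-conversion (variance reduction).

    def variance(ys):
        """Sum of squared deviations from the mean (0 for an empty node)."""
        if not ys:
            return 0.0
        m = sum(ys) / len(ys)
        return sum((y - m) ** 2 for y in ys)

    def best_split(xs, ys):
        """Return (threshold, cost) minimizing left + right variance of ys."""
        best = (None, float("inf"))
        for t in sorted(set(xs)):
            left = [y for x, y in zip(xs, ys) if x <= t]
            right = [y for x, y in zip(xs, ys) if x > t]
            cost = variance(left) + variance(right)
            if cost < best[1]:
                best = (t, cost)
        return best

    moca = [12, 14, 15, 22, 24, 26]    # hypothetical MoCA scores
    months = [6, 8, 7, 30, 34, 32]     # hypothetical months to AD conversion
    t, cost = best_split(moca, months)
    print(t, cost)                     # splits at MoCA <= 15
    ```

    A full tree repeats this split recursively in each child; Spark MLlib distributes the threshold search across the cluster, which is what makes the approach viable at the data volumes the abstract describes.
    
    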

    Brain enhancement through cognitive training: A new insight from brain connectome

    Owing to recent advances in neurotechnology and progress in understanding brain cognitive functions, improving cognitive performance or accelerating the learning process with brain enhancement systems is no longer out of reach; on the contrary, it is a tangible target of contemporary research. Although a variety of approaches have been proposed, we will mainly focus on cognitive training interventions, in which learners repeatedly perform cognitive tasks to improve their cognitive abilities. In this review article, we propose that the learning process during cognitive training can be facilitated by an assistive system monitoring cognitive workloads using electroencephalography (EEG) biomarkers, and that the brain connectome approach can provide additional valuable biomarkers for facilitating learners' learning processes. To this end, we will introduce studies on cognitive training interventions, EEG biomarkers for cognitive workload, and the human brain connectome. As cognitive overload and mental fatigue would reduce or even eliminate gains of cognitive training interventions, real-time monitoring of cognitive workload can facilitate the learning process by flexibly adjusting difficulty levels of the training task. Moreover, cognitive training interventions should have effects on brain sub-networks, not on a single brain region, and graph theoretical network metrics quantifying the topological architecture of the brain network can differentiate between individual cognitive states as well as between different individuals' cognitive abilities, suggesting that the connectome is a valuable approach for tracking learning progress. Although only a few studies have so far exploited the connectome approach for studying alterations of the brain network induced by cognitive training interventions, we believe that it would be a useful technique for capturing improvements of cognitive function.
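    Two of the graph-theoretic network metrics the review refers to, node degree and the local clustering coefficient, can be sketched on a toy adjacency matrix. The 4-node "brain network" below is invented purely for illustration; real connectome analyses operate on much larger, weighted networks derived from imaging or EEG connectivity.

    ```python
    # Toy 4-node undirected network (hypothetical): edges 0-1, 0-2, 1-2, 1-3.
    adj = [
        [0, 1, 1, 0],
        [1, 0, 1, 1],
        [1, 1, 0, 0],
        [0, 1, 0, 0],
    ]

    def degree(adj, i):
        """Number of edges attached to node i."""
        return sum(adj[i])

    def clustering(adj, i):
        """Fraction of pairs of i's neighbours that are themselves connected."""
        nbrs = [j for j, e in enumerate(adj[i]) if e]
        k = len(nbrs)
        if k < 2:
            return 0.0
        links = sum(adj[u][v] for a, u in enumerate(nbrs) for v in nbrs[a + 1:])
        return 2.0 * links / (k * (k - 1))

    print([degree(adj, i) for i in range(4)])   # [2, 3, 2, 1]
    print(clustering(adj, 1))                   # 1 of 3 neighbour pairs linked
    ```

    Changes in such metrics before and after a training intervention are one way a connectome-based system could quantify the network-level effects the abstract hypothesizes.
    
    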

    A Metacognitive Approach to Trust and a Case Study: Artificial Agency

    Trust is defined as a belief of a human H (‘the trustor’) about the ability of an agent A (‘the trustee’) to perform future action(s). We adopt here dispositionalism and internalism about trust: H trusts A iff A has some internal dispositions as competences. The dispositional competences of A are high-level metacognitive requirements, in line with a naturalized virtue epistemology (Sosa, Carter). We advance a Bayesian model of two components: (i) confidence in the decision and (ii) model uncertainty. To trust A, H demands that A be self-assertive about confidence and able to self-correct its own models. In the Bayesian approach trust can be applied not only to humans, but also to artificial agents (e.g. Machine Learning algorithms). We explain the advantage of metacognitive trust when compared to mainstream approaches and how it relates to virtue epistemology. The metacognitive ethics of trust is briefly discussed.
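    A rough sketch of the two Bayesian quantities the abstract names, (i) confidence in a decision and (ii) model uncertainty, using a simple Beta-Bernoulli model: the posterior mean plays the role of the agent's confidence, and the posterior standard deviation measures how unsure the agent is about its own estimate. The prior and observation counts are invented; this is not the authors' actual formalism.

    ```python
    # Hypothetical Beta-Bernoulli illustration of confidence vs. model
    # uncertainty for an artificial agent's action-success estimate.
    import math

    def posterior(successes, failures, a=1.0, b=1.0):
        """Beta(a, b) prior updated on observed Bernoulli outcomes."""
        return a + successes, b + failures

    def confidence(a, b):
        """Posterior mean: the agent's confidence that the action succeeds."""
        return a / (a + b)

    def model_uncertainty(a, b):
        """Posterior std dev: the agent's uncertainty about that estimate."""
        var = a * b / ((a + b) ** 2 * (a + b + 1))
        return math.sqrt(var)

    a, b = posterior(successes=8, failures=2)   # invented track record
    print(confidence(a, b))                     # 0.75
    print(model_uncertainty(a, b))              # shrinks as evidence grows
    ```

    On this reading, a trustworthy agent reports both numbers: the first is its self-assertion of confidence, and a falling second number is evidence that it is self-correcting its model as observations accumulate.
    
    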