
    Neural networks in geophysical applications

    Neural networks are increasingly popular in geophysics. Because they are universal approximators, these tools can approximate any continuous function with arbitrary precision. Hence, they may yield important contributions to finding solutions to a variety of geophysical applications. However, knowledge of many methods and techniques recently developed to increase the performance and to facilitate the use of neural networks does not seem to be widespread in the geophysical community. Therefore, the power of these tools has not yet been explored to its full extent. In this paper, techniques are described for faster training, better overall performance, i.e., generalization, and the automatic estimation of network size and architecture.
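    The universal-approximation property the abstract invokes can be illustrated with a toy sketch: a single-hidden-layer tanh network fit to a continuous target function by plain gradient descent (a hypothetical minimal setup, not taken from the paper).

    ```python
    import numpy as np

    # Toy universal-approximation demo: one hidden layer of tanh units fit to
    # a continuous target (sin) by full-batch gradient descent.
    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(x)                          # continuous target to approximate

    n_hidden = 30
    W1 = rng.normal(0, 1.0, (1, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)

    def forward(x):
        h = np.tanh(x @ W1 + b1)           # hidden activations
        return h, h @ W2 + b2              # network output

    lr = 0.05
    _, pred = forward(x)
    mse_before = np.mean((pred - y) ** 2)
    for _ in range(2000):
        h, pred = forward(x)
        err = pred - y                     # gradient of 0.5*MSE w.r.t. pred
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    _, pred = forward(x)
    mse_after = np.mean((pred - y) ** 2)
    ```

    With more hidden units (and enough training), the residual error can be driven arbitrarily low, which is the content of the universal-approximation theorems.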

    The Theory of Neural Cognition Applied to Robotics

    The Theory of Neural Cognition (TnC) states that the brain does not process information, it only represents information (i.e., it is 'only' a memory). The TnC explains how a memory can become an actor pursuing various goals, and proposes explanations concerning the implementation of a large variety of cognitive abilities, such as attention, memory, language, planning, intelligence, emotions, motivation, pleasure, consciousness and personality. The explanatory power of this new framework extends further though, to tackle special psychological states such as hypnosis, the placebo effect and sleep, and brain diseases such as autism, Alzheimer's disease and schizophrenia. The most interesting findings concern robotics: because the TnC considers the cortical column to be the key cognitive unit (instead of the neuron), it reduces the requirements for a brain implementation to only 160,000 units (instead of 86 billion). A robot exhibiting human-like cognitive abilities is therefore within our reach.

    Naturally Rehearsing Passwords

    We introduce quantitative usability and security models to guide the design of password management schemes: systematic strategies to help users create and remember multiple passwords. In the same way that security proofs in cryptography are based on complexity-theoretic assumptions (e.g., hardness of factoring and discrete logarithm), we quantify usability by introducing usability assumptions. In particular, password management relies on assumptions about human memory, e.g., that a user who follows a particular rehearsal schedule will successfully maintain the corresponding memory. These assumptions are informed by research in cognitive science and validated through empirical studies. Given rehearsal requirements and a user's visitation schedule for each account, we use the total number of extra rehearsals that the user would have to do to remember all of their passwords as a measure of the usability of the password scheme. Our usability model leads us to a key observation: password reuse benefits users not only by reducing the number of passwords that the user has to memorize, but more importantly by increasing the natural rehearsal rate for each password. We also present a security model which accounts for the complexity of password management with multiple accounts and associated threats, including online, offline, and plaintext password leak attacks. Observing that current password management schemes are either insecure or unusable, we present Shared Cues, a new scheme in which the underlying secret is strategically shared across accounts to ensure that most rehearsal requirements are satisfied naturally while simultaneously providing strong security. The construction uses the Chinese Remainder Theorem to achieve these competing goals.
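    The abstract says the Shared Cues construction relies on the Chinese Remainder Theorem. The sketch below shows generic CRT reconstruction, not the paper's actual sharing scheme: a secret is split into residues modulo pairwise-coprime "per-account" moduli (hypothetical values), and the CRT recombines them.

    ```python
    from math import prod

    def crt(residues, moduli):
        # Reconstruct x mod prod(moduli) from residues x mod m_i,
        # assuming the moduli are pairwise coprime.
        M = prod(moduli)
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            # pow(Mi, -1, m) is the modular inverse (Python 3.8+)
            x += r * Mi * pow(Mi, -1, m)
        return x % M

    secret = 123456
    moduli = [97, 101, 103]                # hypothetical per-account moduli
    shares = [secret % m for m in moduli]  # each account holds one residue
    recovered = crt(shares, moduli)
    ```

    Any subset of residues whose moduli multiply to more than the secret suffices to recover it, which is what lets sharing trade off rehearsal load against security.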

    IMU sensing–based Hopfield neuromorphic computing for human activity recognition

    By exploiting the self-association property of the Hopfield neural network, we can reduce the need for extensive sensor training samples during human activity recognition. So that the training algorithm can obtain a general activity feature template with a single pass of data preprocessing, this work proposes a data preprocessing framework suitable for neuromorphic computing. Based on the preprocessing method of the construction matrix and feature extraction, we achieved simplification and improvement of the classification output of the Hopfield neuromorphic algorithm. We assigned different samples to neurons by constructing a feature matrix, which changed the weights of different categories to classify sensor data. Meanwhile, the preprocessing realizes the sensor data fusion process, which helps improve the classification accuracy and avoids falling into the local optimal value caused by single-sensor data. Experimental results show that the framework has high classification accuracy with the necessary robustness. Using the proposed method, the classification and recognition accuracy of the Hopfield neuromorphic algorithm on the three classes of human activities is 96.3%. Compared with traditional machine learning algorithms, the proposed framework only requires learning samples once to get the feature matrix for human activities, complementing the limited sample databases while improving the classification accuracy.
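    The self-association the abstract relies on can be sketched with a toy Hopfield network: binary "activity templates" are stored with the Hebbian outer-product rule, and a corrupted probe is restored by iterated recall. This is an illustrative sketch only; the paper's feature-matrix preprocessing and sensor fusion are not shown.

    ```python
    import numpy as np

    def train(patterns):
        # Hebbian outer-product rule over +/-1 patterns, no self-connections.
        n = patterns.shape[1]
        W = patterns.T @ patterns / n
        np.fill_diagonal(W, 0)
        return W

    def recall(W, state, steps=10):
        # Synchronous sign updates until (hopefully) a stored attractor.
        for _ in range(steps):
            state = np.sign(W @ state)
            state[state == 0] = 1
        return state

    rng = np.random.default_rng(1)
    patterns = rng.choice([-1, 1], size=(3, 64))   # three toy "templates"
    W = train(patterns)

    probe = patterns[0].copy()
    flip = rng.choice(64, size=8, replace=False)   # corrupt 8 of 64 bits
    probe[flip] *= -1
    restored = recall(W, probe)
    overlap = np.mean(restored == patterns[0])
    ```

    Because the stored patterns act as attractors, a noisy sensor reading is pulled toward the nearest template, which is what allows classification from few training samples.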

    Coding serial position in working memory in the healthy and demented brain


    New bottles for new and old wine : new proposals for the study of spontaneous trait inferences

    Doctoral thesis, Psychology (Social Cognition), Universidade de Lisboa, Faculdade de Psicologia, 2017. An important research topic in social cognition concerns the way people understand others' behaviors and the way they use this information to categorize others and infer causes for their actions. More specifically, in this dissertation we investigated Spontaneous Trait Inference (STI), a phenomenon that allows people to infer or extract personality traits from others' overt behaviors and to use those traits to make further judgments. It is a spontaneous mechanism because it occurs without intention or awareness. The dissertation is organized in two parts that deal with two distinct aspects of STI. The first concerns the processes responsible for the occurrence of STI; the second concerns the paradigms used to detect STI and their limitations. In the first part, we discuss the two perspectives that exist in the literature regarding the processes underlying STI. These two perspectives emerged as a reaction to the discovery of a surprising phenomenon, Spontaneous Trait Transference (STT). STT occurs when a trait is inferred from a behavior and associated with someone other than the actor: a communicator, a bystander, or any other irrelevant stimulus present in the context at the moment of encoding. Based on empirical differences between STI and STT, a dualistic perspective was proposed in which STI is said to result from attributional thinking and STT from simple associations. A different perspective suggests that the same associative process can be responsible for both. Our contribution to this debate was to develop a computational model demonstrating that the evidence supporting the dualist view is weak, because a simple associative model can reproduce not only STI and STT but also the empirical differences between them. Moreover, as an assumption of the model, we argued that there might be an attentional difference between STI and STT. We then tested this assumption using the spatial cueing paradigm and eye-tracking, which allowed us to conclude that people pay more attention to the actor of a behavior than to an irrelevant person presented with it. Also in agreement with the attentional difference and with the model, we showed, using a forced recognition paradigm, that in both STI and STT the trait is inferred from the behavior in a similar way, whereas memory for the photo is better in STI than in STT.
    In the second part of the dissertation, we discuss the main methodologies used to measure STI. We start by examining a confound present in many studies investigating STI: word-based priming. This confound consists in the activation of the trait based not on the interpretation of the whole sentence and the behavior it describes, but on the presence of specific words that alone lead to the priming of the trait. We showed that this is only a problem for immediate measures of STI, such as the probe recognition paradigm, but not for delayed measures, such as false recognition. A different limitation that affects all memory-based measures is contamination by explicit recall of the sentence. The use of online measures can solve that problem in part. However, online measures are data-driven; in other words, they rely on featural and perceptual processing. This characteristic makes them unsuitable for STI, which is a conceptually driven mechanism. Thus, we introduce a new implicit conceptual measure, the modified free association task. In this task, people first read trait-implying or control material. Afterwards, they perform a free association task where a word (the inferred trait) is presented and the subject is instructed to say the first word that comes to mind when reading the presented target. We tested this new paradigm in delayed and immediate modes, and we also tested its sensitivity to the difference between STI and STT.

    Designing a training tool for imaging mental models

    The training process can be conceptualized as the student acquiring an evolutionary sequence of classification-problem-solving mental models. For example, a physician learns (1) classification systems for patient symptoms, diagnostic procedures, diseases, and therapeutic interventions and (2) interrelationships among these classifications (e.g., how to use diagnostic procedures to collect data about a patient's symptoms in order to identify the disease so that therapeutic measures can be taken). This project developed functional specifications for a computer-based tool, Mental Link, that allows the evaluative imaging of such mental models. The fundamental design approach underlying this representational medium is traversal of virtual cognition space. Typically intangible cognitive entities and links among them are visible as a three-dimensional web that represents a knowledge structure. The tool has a high degree of flexibility and customizability to allow extension to other types of uses, such as a front-end to an intelligent tutoring system, knowledge base, hypermedia system, or semantic network.

    Advances in Reinforcement Learning

    Reinforcement Learning (RL) is a very dynamic area in terms of theory and application. This book brings together many different aspects of current research in several fields associated with RL, which has been growing rapidly, producing a wide variety of learning algorithms for different applications. Across 24 chapters, it covers a broad variety of topics in RL and their application in autonomous systems. A set of chapters in this book provides a general overview of RL, while other chapters focus mostly on the applications of RL paradigms: Game Theory, Multi-Agent Theory, Robotics, Networking Technologies, Vehicular Navigation, Medicine, and Industrial Logistics.
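    One of the simplest members of the algorithm family such a book surveys is tabular Q-learning. The sketch below runs it on a hypothetical five-state chain (reach the rightmost state for reward 1); it is a generic illustration, not an example from the book.

    ```python
    import random

    # Tabular Q-learning on a toy 5-state chain: actions step left/right,
    # reward 1 on reaching state 4 (terminal). Epsilon-greedy exploration.
    random.seed(0)
    n_states, actions = 5, [-1, +1]
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma, eps = 0.5, 0.9, 0.1

    for _ in range(500):                   # episodes
        s = 0
        while s != n_states - 1:
            if random.random() < eps:
                a = random.choice(actions)             # explore
            else:
                a = max(actions, key=lambda a: Q[(s, a)])  # exploit
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Q-learning update: bootstrap on greedy value of next state
            nxt = 0.0 if s2 == n_states - 1 else gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (r + nxt - Q[(s, a)])
            s = s2

    # Greedy policy learned for the non-terminal states
    greedy = [max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)]
    ```

    After training, the greedy policy steps right everywhere, since the discounted value of moving toward the goal strictly dominates moving away from it.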