
    Printed synaptic transistor–based electronic skin for robots to feel and learn

    An electronic skin (e-skin) for the next generation of robots is expected to have biological skin-like multimodal sensing, signal encoding, and preprocessing. To this end, it is imperative to have high-quality, uniformly responding electronic devices distributed over large areas and capable of delivering synaptic behavior with long- and short-term memory. Here, we present an approach to realize synaptic transistors (a 12-by-14 array) using ZnO nanowires printed on a flexible substrate with 100% yield and high uniformity. The presented devices show synaptic behavior under pulse stimuli, exhibiting excitatory (inhibitory) post-synaptic currents, spiking-rate-dependent plasticity, and short-term to long-term memory transition. The as-realized transistors demonstrate excellent bio-like synaptic behavior and show great potential for in-hardware learning. This is demonstrated through a prototype computational e-skin, comprising event-driven sensors, synaptic transistors, and spiking neurons, that bestows biological skin-like haptic sensations on a robotic hand. With associative learning, the presented computational e-skin could gradually acquire a human body-like pain reflex. The learnt behavior could be strengthened through practice. Such peripheral nervous system-like localized learning could substantially reduce data latency and decrease the cognitive load on the robotic platform.
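    The abstract gives no equations or device parameters; as a rough, purely illustrative sketch of the associative learning and short-term to long-term memory transition it describes, the toy Hebbian synapse below pairs a touch spike with a noxious signal until the potentiated efficacy consolidates. The class name, constants, and thresholds (ToySynapse, theta_ltm, and so on) are assumptions made for illustration, not the authors' device model.

```python
# Toy sketch only: Hebbian potentiation with short-term decay and a crude
# short-term -> long-term consolidation step. Not the paper's device model.

class ToySynapse:
    def __init__(self, w=0.1, decay=0.9, lr=0.5, theta_ltm=0.7):
        self.w = w                    # total synaptic efficacy
        self.long_term = 0.0          # consolidated, non-decaying component
        self.decay = decay            # per-step decay of the short-term component
        self.lr = lr                  # Hebbian learning rate
        self.theta_ltm = theta_ltm    # efficacy needed to trigger consolidation

    def step(self, pre_spike, post_spike):
        if pre_spike and post_spike:              # coincident activity -> potentiation
            self.w += self.lr * (1.0 - self.w)
        if self.w > self.theta_ltm:               # crude short-term -> long-term transfer
            self.long_term = max(self.long_term, 0.6)
        # only the short-term part decays, back toward the consolidated level
        self.w = self.long_term + (self.w - self.long_term) * self.decay
        return self.w

# Associative "pain reflex" pairing: touch (pre) repeatedly co-occurs with a noxious signal (post)
syn = ToySynapse()
for _ in range(20):
    syn.step(pre_spike=True, post_spike=True)     # practice strengthens the pathway
for _ in range(20):
    syn.step(pre_spike=False, post_spike=False)   # practice stops; only decay remains
print(round(syn.w, 2))   # stays near the consolidated level, so touch alone can drive the reflex
```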

    Cortical Learning of Recognition Categories: A Resolution of the Exemplar Vs. Prototype Debate

    Do humans and animals learn exemplars or prototypes when they categorize objects and events in the world? How are different degrees of abstraction realized through learning by neurons in inferotemporal and prefrontal cortex? How do top-down expectations influence the course of learning? Thirty related human cognitive experiments (the 5-4 category structure) have been used to test competing views in the prototype-exemplar debate. In these experiments, during the test phase, subjects unlearn in a characteristic way items that they had learned to categorize perfectly in the training phase. Many cognitive models do not describe how an individual learns or forgets such categories through time. Adaptive Resonance Theory (ART) neural models provide such a description, and also clarify both psychological and neurobiological data. Matching of bottom-up signals with learned top-down expectations plays a key role in ART model learning. Here, an ART model is used to learn incrementally in response to 5-4 category structure stimuli. Simulation results agree with experimental data, achieving perfect categorization in training and a good match to the pattern of errors exhibited by human subjects in the testing phase. These results show how the model learns both prototypes and certain exemplars in the training phase. ART prototypes are, however, unlike the ones posited in the traditional prototype-exemplar debate. Rather, they are critical patterns of features to which a subject learns to pay attention based on past predictive success and the order in which exemplars are experienced. Perturbations of old memories by newly arriving test items generate a performance curve that closely matches the performance pattern of human subjects. The model also clarifies exemplar-based accounts of data concerning amnesia. Defense Advanced Research Projects Agency SyNaPSE program (Hewlett-Packard Company, DARPA HR0011-09-3-0001; HRL Laboratories LLC #801881-BS under HR0011-09-C-0011); Science of Learning Centers program of the National Science Foundation (NSF SBE-0354378).
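    The abstract does not specify which ART variant was simulated; the sketch below is a generic fuzzy-ART learning loop (complement coding, category choice, vigilance-gated resonance) that illustrates how bottom-up inputs are matched against learned top-down expectations. The function name, vigilance value, and toy stimuli are illustrative assumptions, not the paper's model or parameters.

```python
import numpy as np

# Generic fuzzy-ART sketch: each input is matched against learned categories;
# the vigilance test decides whether a category resonates or is reset.

def fuzzy_art_learn(inputs, rho=0.75, alpha=0.001, beta=1.0):
    """inputs: rows of features in [0, 1] (e.g. binary 5-4-style stimuli)."""
    inputs = np.hstack([inputs, 1.0 - inputs])    # complement coding
    weights, labels = [], []
    for x in inputs:
        # rank existing categories by the ART choice function
        scores = [np.minimum(x, w).sum() / (alpha + w.sum()) for w in weights]
        chosen = None
        for j in np.argsort(scores)[::-1]:        # best-scoring category first
            w = weights[j]
            match = np.minimum(x, w).sum() / x.sum()
            if match >= rho:                      # vigilance test passed: resonance
                weights[j] = beta * np.minimum(x, w) + (1 - beta) * w
                chosen = j
                break
            # vigilance test failed: reset this category and try the next one
        if chosen is None:                        # nothing resonates: recruit a new category
            weights.append(x.copy())
            chosen = len(weights) - 1
        labels.append(chosen)
    return labels, weights

# Toy binary stimuli standing in for 5-4-style exemplars
stimuli = np.array([[1, 1, 1, 0],
                    [1, 1, 0, 1],
                    [0, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
print(fuzzy_art_learn(stimuli, rho=0.7)[0])       # category label assigned to each exemplar
```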

    "Sticky Hands": learning and generalization for cooperative physical interactions with a humanoid robot

    "Sticky Hands" is a physical game for two people involving gentle contact with the hands. The aim is to develop relaxed and elegant motion together, achieve physical sensitivity-improving reactions, and experience an interaction at an intimate yet comfortable level for spiritual development and physical relaxation. We developed a control system for a humanoid robot allowing it to play Sticky Hands with a human partner. We present a real implementation including a physical system, robot control, and a motion learning algorithm based on a generalizable intelligent system capable itself of generalizing observed trajectories' translation, orientation, scale and velocity to new data, operating with scalable speed and storage efficiency bounds, and coping with contact trajectories that evolve over time. Our robot control is capable of physical cooperation in a force domain, using minimal sensor input. We analyze robot-human interaction and relate characteristics of our motion learning algorithm with recorded motion profiles. We discuss our results in the context of realistic motion generation and present a theoretical discussion of stylistic and affective motion generation based on, and motivating cross-disciplinary research in computer graphics, human motion production and motion perception

    Motivations, Classification and Model Trial of Conversational Agents for Insurance Companies

    Advances in artificial intelligence have renewed interest in conversational agents. So-called chatbots have reached maturity for industrial applications. German insurance companies are interested in improving their customer service and digitizing their business processes. In this work we investigate the potential use of conversational agents in insurance companies by determining which classes of agents are of interest to insurance companies, finding relevant use cases and requirements, and developing a prototype for an exemplary insurance scenario. Based on this approach, we derive key findings for conversational agent implementation in insurance companies. Comment: 12 pages, 6 figures; accepted for presentation at The International Conference on Agents and Artificial Intelligence 2019 (ICAART 2019).

    Brain Categorization: Learning, Attention, and Consciousness

    How do humans and animals learn to recognize objects and events? Two classical views are that exemplars or prototypes are learned. A hybrid view is that a mixture, called rule-plus-exceptions, is learned. None of these models learns its categories. A distributed ARTMAP neural network with self-supervised learning incrementally learns categories that match human learning data on a class of thirty diagnostic experiments called the 5-4 category structure. Key predictions of ART models have received behavioral, neurophysiological, and anatomical support. The ART prediction about what goes wrong during amnesic learning has also been supported: a lesion in its orienting system causes a low vigilance parameter. Air Force Office of Scientific Research (F49620-01-1-0397, F49620-01-1-0423); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-01-1-0624); the National Geospatial Intelligence Agency (NMA 201-01-1-2016); National Science Foundation (EIA-01-30851, IIS-97-20333, SBE-0354378); Office of Naval Research (N00014-95-1-0657, N00014-01-1-0624).
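    As a standalone toy of the vigilance test referred to in the abstract's amnesia prediction, the snippet below checks how much of an input the learned top-down expectation accounts for and compares that match to the vigilance parameter rho. Modeling the orienting-system lesion simply as a lowered rho is an illustrative assumption, not the model's actual lesion mechanism.

```python
import numpy as np

# Toy vigilance test: does the learned expectation account for enough of the input?

def resonates(x, w, rho):
    return np.minimum(x, w).sum() / x.sum() >= rho

x = np.array([1.0, 1.0, 1.0, 0.0])   # current exemplar
w = np.array([1.0, 0.0, 1.0, 0.0])   # learned prototype / top-down expectation

print(resonates(x, w, rho=0.8))   # intact vigilance: mismatch triggers reset and a new category
print(resonates(x, w, rho=0.4))   # low vigilance ("amnesic"): a coarse category is accepted
```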

    What influences the speed of prototyping? An empirical investigation of twenty software startups

    It is essential for startups to quickly experiment with business ideas by building tangible prototypes and collecting user feedback on them. As prototyping is an inevitable part of learning for early-stage software startups, how fast startups can learn depends on how fast they can prototype. Despite its importance, there is a lack of research about prototyping in software startups. In this study, we aimed at understanding what factors influence different types of prototyping activities. We conducted a multiple case study on twenty European software startups. The results are twofold: first, we propose a prototype-centric learning model for early-stage software startups; second, we identify factors that act as barriers but also as facilitators for prototyping in early-stage software startups. The factors are grouped into (1) artifacts, (2) team competence, (3) collaboration, (4) customer, and (5) process dimensions. To speed up a startup's progress at the early stage, it is important to incorporate the learning objective into a well-defined collaborative approach to prototyping. Comment: This is the author's version of the work. The copyright owner's version can be accessed at doi.org/10.1007/978-3-319-57633-6_2, XP2017, Cologne, Germany.

    Novelty and Reinforcement Learning in the Value System of Developmental Robots

    The value system of a developmental robot signals the occurrence of salient sensory inputs, modulates the mapping from sensory inputs to action outputs, and evaluates candidate actions. In the work reported here, a low-level value system is modeled and implemented. It simulates the non-associative animal learning mechanism known as the habituation effect. Reinforcement learning is also integrated with novelty. Experimental results show that the proposed value system works as designed in a study of robot viewing-angle selection.
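    The abstract names the mechanisms but not their equations; the sketch below is one plausible reading: a novelty bonus that habituates with repeated exposure is added to the external reward and fed into a standard Q-learning update. The state/action space (toy "viewing angles"), constants, and function names are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import defaultdict

# Sketch: novelty bonus + habituation feeding into a plain Q-learning update.

ALPHA, GAMMA, HAB = 0.2, 0.9, 0.5        # learning rate, discount, habituation rate
q = defaultdict(float)                    # Q(state, action)
novelty = defaultdict(lambda: 1.0)        # per-stimulus novelty, habituates toward 0
ACTIONS = ["look_left", "look_center", "look_right"]   # toy viewing-angle choices

def value_system_step(state, action, next_state, external_reward):
    bonus = novelty[next_state]           # salient (novel) inputs get extra value
    novelty[next_state] *= (1.0 - HAB)    # habituation: repeated exposure dulls the response
    r = external_reward + bonus
    best_next = max(q[(next_state, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (r + GAMMA * best_next - q[(state, action)])

state = "center"
for _ in range(30):
    action = random.choice(ACTIONS)       # exploratory policy for the demo
    next_state = action.replace("look_", "")
    value_system_step(state, action, next_state, external_reward=0.0)
    state = next_state
print({k: round(v, 2) for k, v in q.items()})
```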