
    A new look at joint attention and common knowledge

    Everyone agrees that joint attention is a key feature of human social cognition. Yet, despite over 40 years of work and hundreds of publications on this topic, there is still surprisingly little agreement on what exactly joint attention is, and how the jointness in it is achieved. Part of the problem, we propose, is that joint attention is not a single process, but rather it includes a cluster of different cognitive skills and processes, and different researchers focus on different aspects of it. A similar problem applies to common knowledge. Here we present a new approach: We outline a typology of social attention levels which are currently all referred to in the literature as joint attention (from monitoring to common, mutual, and shared attention), along with corresponding levels of common knowledge. We consider cognitive, behavioral, and phenomenological aspects of the different levels as well as their different functions, and a key distinction we make in all of this is second-personal vs. third-personal relations. While we focus mainly on joint attention and common knowledge, we also briefly discuss how these levels might apply to other ‘joint’ mental states such as joint goals. Postprint, peer reviewed.

    Developing an Affect-Aware Rear-Projected Robotic Agent

    Social (or sociable) robots are designed to interact with people in a natural and interpersonal manner. They are becoming an integrated part of our daily lives and have achieved positive outcomes in several applications such as education, health care, quality of life, and entertainment. Despite significant progress towards the development of realistic social robotic agents, a number of problems remain to be solved. First, current social robots either lack the ability to have deep social interaction with humans, or they are very expensive to build and maintain. Second, current social robots have yet to reach the full emotional and social capabilities necessary for rich and robust interaction with human beings. To address these problems, this dissertation presents the development of a low-cost, flexible, affect-aware rear-projected robotic agent (called ExpressionBot) that is designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The developed robotic platform uses state-of-the-art character animation technologies to create an animated human face (aka avatar) that is capable of showing facial expressions, realistic eye movement, and accurate visual speech, and then projects this avatar onto a face-shaped translucent mask. The mask and the projector are then rigged onto a neck mechanism that can move like a human head. Since an animation is projected onto a mask, the robotic face is a highly flexible research tool that is mechanically simple and low-cost to design, build, and maintain compared with mechatronic and android faces. The results of our comprehensive Human-Robot Interaction (HRI) studies illustrate the benefits and value of the proposed rear-projected robotic platform over a virtual agent with the same animation displayed on a 2D computer screen.
The results indicate that ExpressionBot is well accepted by users, with some advantages in expressing facial expressions more accurately and in perceiving mutual eye-gaze contact. To improve the social capabilities of the robot and create an expressive and empathic (affect-aware) social agent capable of interpreting users' emotional facial expressions, we developed a new Deep Neural Network (DNN) architecture for Facial Expression Recognition (FER). The proposed DNN was initially trained on seven well-known publicly available databases, and achieved results significantly better than, or comparable to, traditional convolutional neural networks and other state-of-the-art methods in both accuracy and learning time. Since the performance of an automated FER system depends heavily on its training data, and the eventual goal of the proposed robotic platform is to interact with users in an uncontrolled environment, a database of facial expressions in the wild (called AffectNet) was created by querying emotion-related keywords from different search engines. AffectNet contains more than 1M images with faces and 440,000 manually annotated images with facial expressions, valence, and arousal. Two DNNs were trained on AffectNet to classify the facial expression images and to predict the values of valence and arousal. Various evaluation metrics show that our deep neural network approaches trained on AffectNet perform better than conventional machine learning methods and available off-the-shelf FER systems. We then integrated this automated FER system into the spoken dialog of our robotic platform to extend ExpressionBot's capabilities beyond spoken dialog and create an affect-aware robotic agent that can measure and infer users' affect and cognition. Three social/interaction aspects (task engagement, being empathic, and likability of the robot) were measured in an experiment with the affect-aware robotic agent.
The results indicate that users rated our affect-aware agent as empathic and likable as a Wizard-of-Oz (WoZ) robot in which the user's affect is recognized by a human. In summary, this dissertation presents the development and HRI studies of a perceptive, expressive, conversational, rear-projected, life-like robotic agent (aka ExpressionBot or Ryan) that models natural face-to-face communication between a human and an empathic agent. The results of our in-depth human-robot interaction studies show that this robotic agent can serve as a model for creating the next generation of empathic social robots.
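    The dual-output idea described above (discrete expression categories plus continuous valence/arousal predicted from the same face representation) can be sketched as two task heads on a shared embedding. This is an illustrative sketch only, not the dissertation's architecture: the class count, feature dimension, and random weights are placeholder assumptions, and a real system would learn these parameters by training a deep backbone on AffectNet.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 8    # assumed number of discrete expression categories
FEAT_DIM = 128   # hypothetical size of the shared face embedding

# Placeholder weights; in practice these are learned from annotated data.
W_cls = rng.standard_normal((FEAT_DIM, N_CLASSES)) * 0.01
b_cls = np.zeros(N_CLASSES)
W_va = rng.standard_normal((FEAT_DIM, 2)) * 0.01   # 2 outputs: valence, arousal
b_va = np.zeros(2)

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict(features):
    """Run both heads on a batch of shared face embeddings.

    Returns:
      probs: probability distribution over expression categories
      va:    valence/arousal values squashed into [-1, 1] via tanh
    """
    probs = softmax(features @ W_cls + b_cls)
    va = np.tanh(features @ W_va + b_va)
    return probs, va

# A batch of 4 hypothetical face embeddings from some backbone network.
feats = rng.standard_normal((4, FEAT_DIM))
probs, va = predict(feats)
```

    The design choice worth noting is that both predictions share one representation: the classification head and the valence/arousal regression head differ only in their final linear layer and output squashing, which is a common pattern when one network must produce both categorical and continuous affect estimates.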

    Technology of swallowable capsule for medical applications

    Medical technology has undergone major breakthroughs in recent years, especially in the area of examination tools for diagnostic purposes. This paper reviews swallowable capsule technology for the examination of the gastrointestinal system for various diseases. The wireless camera pill provides a more advanced method for diagnosing gastrointestinal diseases than many traditional examinations, such as gastroscopy with an endoscope. After years of innovation, commercial swallowable pills have been produced and applied in clinical practice. These smart pills can cover the examination of the gastrointestinal system and not only provide physicians with much more useful data than is available from traditional methods, but also eliminate the painful endoscopy procedure. In this paper, the key state-of-the-art technologies in existing Wireless Capsule Endoscopy (WCE) systems are fully reported and recent research progress related to these technologies is reviewed. The paper ends with a further discussion of the current technical bottlenecks and future research in this area.

    Development of skills in children with ASD using a robotic platform

    Interaction and communication skills are essential for living in society. However, individuals with autism spectrum disorders (ASD) show deficits in these abilities, which affects their daily lives. Previous studies suggest that children with ASD demonstrate some positive behaviors in the presence of a robotic platform. This study intends to evaluate the effect of a robotic platform on children with ASD, examining whether the platform can act as a stimulating agent for children's interaction as well as a promoter of skill learning. The Lego Mindstorms NXT robot is therefore used as a mediator/reward to encourage children with ASD to interact with others and also to learn some cognitive skills.

    The authors are grateful to the teachers and students of the primary and secondary schools of Aver-o-Mar, and to their parents, for their participation in the project. The authors are also grateful to the Portuguese Foundation for funding through the R&D project RIPD/ADA/109407/2009.

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 182, July 1978

    This bibliography lists 165 reports, articles, and other documents introduced into the NASA scientific and technical information system in June 1978.