
    Social touch in human–computer interaction

    Touch is our primary non-verbal communication channel for conveying intimate emotions and, as such, is essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT-mediated or -generated touch as an intuitive way of social communication is further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience, with a focus on the first two. Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT-mediated or -generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to convey affective information more effectively. However, this research field at the crossroads of ICT and psychology is still embryonic, and we identify several topics that can help it mature: establishing an overarching theoretical framework, employing better research methodologies, developing basic social touch building blocks, and solving specific ICT challenges.

    Social Touch

    Interpersonal or social touch is an intuitive and powerful way to express and communicate emotions, comfort a friend, bond with teammates, comfort a child in pain, and soothe someone who is stressed. If there is one thing that the current pandemic is showing us, it is that social distancing can make some people crave physical interaction through social touch. The notion of “skin-hunger” has become tangible for many. Social touch differs at a functional and anatomical level from discriminative touch, and has clear effects at physiological, emotional, and behavioural levels. Social touch is a topic in psychology (perception, emotion, behaviour), neuroscience (neurophysiological pathways), computer science (mediated touch communication), engineering (haptic devices), robotics (social robots that can touch), humanities (science and technology studies), and sociology (the social implications of touch). Our current scientific knowledge of social touch is scattered across disciplines and not yet adequate for the purpose of meeting today's challenges of connecting human beings through the mediating channel of technology.

    IMPROVING YOUNG GENERATION RESOURCES THROUGH ENHANCING THE PROFESSIONAL CAPABILITY OF EDUCATIONAL PERSONNEL

    This thesis project is an interaction design study of how finger touch gestures can be used as expressive alternatives to text comments on social networking sites. The study uses qualitative research methods and a user-centred approach. It collects literature on how emotion is modeled in human–computer interaction and how emotion can be expressed through touch. The popular social networking site Facebook is used as a case study of user behavior on social networking sites and as a starting point for the design of the interaction. A user study was conducted with two participants with extensive experience of the mobile Facebook application. The results of the study are five design ideas based on previous work in the research area and on feedback from the participants of the user study. The interaction of two of the design ideas was developed into simple web prototypes to see if the functionality could be implemented. This thesis project is an exploratory beginning on the use of finger touch gestures for the expression of emotions on social networking sites. These design ideas will have to be developed into usable prototypes and tested with users in future research.

    Animated virtual agents to cue user attention: comparison of static and dynamic deictic cues on gaze and touch responses

    This paper describes an experiment developed to study the performance of animated virtual agent cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human–computer interfaces. Both experiments measured the efficiency of agent cues by analyzing participant responses either by gaze or by touch. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent produced the shortest response times, with slightly smaller time differences. Responses to the fully animated agent were 17% and 20% faster than to the 2-image and 1-image cues, respectively.
These results inform techniques aimed at engaging users' attention in complex scenes, such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.
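    The "X% faster" figures above are baseline-relative differences in response time. A minimal sketch of that comparison, using hypothetical millisecond means chosen only to illustrate the formula (the abstract reports percentages, not raw times):

    ```python
    def percent_faster(t_baseline, t_new):
        """Relative speedup: how much faster t_new is than t_baseline, in percent."""
        return 100.0 * (t_baseline - t_new) / t_baseline

    # Hypothetical mean response times (ms) for the three cue conditions.
    static_1img, stepped_2img, animated = 1200.0, 1077.0, 700.0

    print(round(percent_faster(static_1img, animated)))   # 42
    print(round(percent_faster(stepped_2img, animated)))  # 35
    ```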

    Interactive skin through a social-sensory speculative lens

    This paper uses a speculative lens to explore the social and sensory trajectories of Interactive Skin, a class of skin-worn epidermal devices that augment the human body in ways that are significant for affective techno-touch. The paper presents and discusses the use of a speculative narrative on Interactive Skin futures, produced through an exploratory research collaboration with a Human–Computer Interaction (HCI) lab, combining data from speculative methods (cultural probe returns and a future-orientated workshop) with an ethnographic sensitivity to writing. The speculative narrative takes the form of a found archive of fictional fragments that are research provocations in their own right. We discuss their potentials, including the ability to foster interdisciplinary dialogue between social and HCI researchers and to agitate the socio-technological space of interactive skin futures, as well as their limitations. The paper concludes that a socially orientated speculative approach can provide useful insights on the interconnection between the senses, society, and technology in the context of emergent affective techno-touch technologies.

    Detecting and Classifying Human Touches in a Social Robot Through Acoustic Sensing and Machine Learning

    An important aspect of Human-Robot Interaction is responding to different kinds of touch stimuli. To date, several technologies have been explored to determine how a touch is perceived by a social robot, usually by placing a large number of sensors throughout the robot's shell. In this work, we introduce a novel approach in which the audio acquired from contact microphones located in the robot's shell is processed using machine learning techniques to distinguish between different types of touches. The system is able to determine when the robot is touched (touch detection) and to ascertain the kind of touch performed among a set of possibilities: stroke, tap, slap, and tickle (touch classification). This proposal is cost-effective: just a few microphones can cover the whole robot's shell, since a single microphone is enough for each solid part of the robot. Besides, it is easy to install and configure, as it only requires a contact surface to attach the microphone to the robot's shell and a plug into the robot's computer. Results show high accuracy in touch gesture recognition. The testing phase revealed that Logistic Model Trees achieved the best performance, with an F-score of 0.81. The dataset was built with information from 25 participants performing a total of 1981 touch gestures. The research leading to these results has received funding from the projects: Development of social robots to help seniors with cognitive impairment (ROBSEN), funded by the Ministerio de Economia y Competitividad; and RoboCity2030-III-CM, funded by Comunidad de Madrid and co-funded by Structural Funds of the EU.
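    The pipeline this abstract describes (contact-microphone audio → features → classifier) can be sketched as follows. This is a minimal illustration, not the paper's method: the authors trained Logistic Model Trees on real recordings, while this sketch uses synthetic waveforms, two hand-picked features (RMS energy and zero-crossing rate), and a nearest-centroid classifier; all signal parameters and names are assumptions.

    ```python
    import math
    import random

    random.seed(0)

    def synth_touch(kind, n=200):
        # Hypothetical waveforms: a "tap" as a loud, high-frequency burst,
        # a "stroke" as a soft, slow oscillation. Real input would come
        # from contact microphones on the robot's shell.
        amp, freq = (1.0, 0.30) if kind == "tap" else (0.2, 0.02)
        return [amp * math.sin(2 * math.pi * freq * i) + random.gauss(0, 0.01)
                for i in range(n)]

    def extract_features(signal):
        # Two cheap descriptors of how loud and how "sharp" the contact is:
        # root-mean-square energy and zero-crossing rate.
        n = len(signal)
        rms = math.sqrt(sum(x * x for x in signal) / n)
        zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / (n - 1)
        return (rms, zcr)

    def train_centroids(examples):
        # examples: list of (label, (rms, zcr)); average features per class.
        sums, counts = {}, {}
        for label, (rms, zcr) in examples:
            s = sums.setdefault(label, [0.0, 0.0])
            s[0] += rms
            s[1] += zcr
            counts[label] = counts.get(label, 0) + 1
        return {lab: (s[0] / counts[lab], s[1] / counts[lab]) for lab, s in sums.items()}

    def classify(features, centroids):
        # Assign the label whose centroid is nearest in feature space.
        return min(centroids, key=lambda lab: (features[0] - centroids[lab][0]) ** 2
                                              + (features[1] - centroids[lab][1]) ** 2)

    train = [(k, extract_features(synth_touch(k))) for k in ("tap", "stroke") for _ in range(10)]
    centroids = train_centroids(train)
    print(classify(extract_features(synth_touch("tap")), centroids))     # tap
    print(classify(extract_features(synth_touch("stroke")), centroids))  # stroke
    ```

    The same structure scales to the paper's four gestures by adding classes and richer spectral features, and by swapping the toy classifier for a learned model.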

    Gestures and Interaction: exploiting natural abilities in the design of interactive systems

    Collana seminari interni 2012, Number 20120606. This talk explores the role of gestures in computer-supported collaboration. People make extensive use of non-verbal forms of communication when they interact with each other in everyday life: of these, gestures are relatively easy to observe and quantify. However, the role of gestures in human–computer interaction has so far focused mainly on using conventional signs as visible commands, rather than on exploiting all nuances of such a natural human skill. We propose a perspective on natural interaction that builds on recent advances in tangible interaction, embodiment, and computer-supported collaborative work. We consider the social and cognitive aspects of gestures and manipulations to support our claim of a primacy of tangible and multi-touch interfaces, and describe our experiences in assessing the suitability of such interface paradigms to traditional application scenarios.

    Editorial

    This special issue seeks to provoke, challenge, and inspire more multimodal scholars to engage with and interrogate touch. Collectively, the contributions situate touch as part of a multimodal and multisensorial experience at the intersection of the body, technology, and environment. The contributions offer different routes to critically explore the social, sensory, and affective roles of touch in a changing communicational and interactional landscape. They draw on approaches from multimodality, ethnography, material engagement theory, Human–Computer Interaction, speculative research, as well as artistic and design-based research. To situate the special issue, we give a brief overview of why touch matters and outline the extended view of touch that informs it. We comment on the challenges of researching touch and suggest the potential of multimodality as one way forward, and we point to the benefits of combining multimodality with other approaches.

    Human-Robot interaction with low computational-power humanoids

    This article investigates the possibilities of human-humanoid interaction with robots whose computational power is limited. The project was carried out during a year of work at the Computer and Robot Vision Laboratory (VisLab), part of the Institute for Systems and Robotics in Lisbon, Portugal. Communication, the basis of interaction, is simultaneously visual, verbal, and gestural. The robot's algorithm provides users with natural-language communication, being able to capture and understand a person's needs and feelings. The design of the system should, consequently, give it the capability to dialogue with people in a way that makes the understanding of their needs possible. To feel natural, the whole experience is independent of the GUI, which is used only as an auxiliary instrument. Furthermore, the humanoid can communicate through gestures, touch, and visual perception and feedback. This creates a totally new type of interaction in which the robot is not just a machine to use, but a figure to interact and talk with: a social robot.