
    The Case for Public Interventions during a Pandemic

    Funding Information: This work has been supported by Marie SkƂodowska-Curie Actions ITN AffecTech (ERC H2020 Project ID: 722022). Publisher Copyright: © 2022 by the authors.
    Within the field of movement sensing and sound interaction research, multi-user systems have gradually gained interest as a means to facilitate expressive non-verbal dialogue. Drawing on studies grounded in psychology and choreographic theory, we consider the qualities of interaction that foster an elevated sense of social connectedness without requiring entry into another person’s personal space. Reflecting on the newly adopted concept of social distancing, we orchestrate a technological intervention that places interpersonal distance and sound at the core of interaction. Materialised as a set of sensory face-masks, a novel wearable system was developed and tested in the context of a live public performance, from which we gathered the users’ individual perspectives and correlated them with patterns identified in the recorded data. We identify and discuss traits of user behaviour attributable to the system’s influence and construct four fundamental design considerations for physically distanced sound interaction. The study concludes with essential technical reflections, accompanied by an adaptation for a pervasive sensory intervention that was finally deployed in an open public space.
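    As a minimal illustration of the kind of mapping at the core of such a system, the sketch below converts a measured interpersonal distance into sound-synthesis parameters. The sensor range, frequency range, and function name are assumptions for illustration, not the published design.

```python
# Hypothetical sketch: map interpersonal distance (e.g. from a wearable
# proximity sensor) to sound-synthesis controls. Closer proximity is
# rendered as louder, higher-pitched sound. All ranges are assumptions.

def distance_to_sound(distance_m, min_d=0.5, max_d=4.0):
    """Map a distance in metres to (amplitude, pitch_hz) controls."""
    # Clamp to the sensor's assumed useful range, then normalise to [0, 1].
    d = max(min_d, min(max_d, distance_m))
    closeness = 1.0 - (d - min_d) / (max_d - min_d)
    amplitude = closeness                  # louder as users approach
    pitch_hz = 220.0 + 440.0 * closeness   # sweep from 220 Hz up to 660 Hz
    return amplitude, pitch_hz
```

    A synthesiser would poll this mapping per sensor frame and feed the result to its amplitude and oscillator-frequency inputs.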

    NON-VERBAL COMMUNICATION WITH PHYSIOLOGICAL SENSORS. THE AESTHETIC DOMAIN OF WEARABLES AND NEURAL NETWORKS

    Historically, communication implies the transfer of information between bodies, yet this phenomenon is constantly adapting to new technological and cultural standards. In a digital context, it is commonplace to envision systems that revolve around verbal modalities. However, behavioural analysis grounded in psychology research calls attention to the emotional information disclosed by non-verbal social cues, in particular actions that are involuntary. This notion has circulated widely through various interdisciplinary computing research fields, from which multiple studies have arisen correlating non-verbal activity with socio-affective inferences. These are often derived from some form of motion capture and other wearable sensors, measuring the ‘invisible’ bioelectrical changes that occur inside the body. This thesis proposes a motivation and methodology for using physiological sensory data as an expressive resource for technology-mediated interactions. It begins with a thorough discussion of state-of-the-art technologies and established design principles on this topic, which is then applied to a novel approach alongside a selection of practice works that complement it. We advocate for aesthetic experience, experimenting with abstract representations. Unlike prevailing Affective Computing systems, the intention is not to infer or classify emotion but rather to create new opportunities for rich gestural exchange, unconfined to the verbal domain. Given the preliminary proposition of non-representation, we justify a correspondence with modern Machine Learning and multimedia interaction strategies, applying an iterative, human-centred approach to improve personalisation without compromising the emotional potential of bodily gesture.
Where related studies in the past have successfully provoked strong design concepts through innovative fabrications, these are typically limited to simple linear, one-to-one mappings and often neglect multi-user environments; here we foresee vast potential. In our use cases, we adopt neural network architectures to generate highly granular biofeedback from low-dimensional input data. We present the following proofs of concept: Breathing Correspondence, a wearable biofeedback system inspired by Somaesthetic design principles; Latent Steps, a real-time autoencoder to represent bodily experiences from sensor data, designed for dance performance; and Anti-Social Distancing Ensemble, an installation for public-space interventions, analysing physical distance to generate a collective soundscape. Key findings are extracted from the individual reports to formulate an extensive technical and theoretical framework around this topic. The projects first aim to embrace some alternative perspectives already established within Affective Computing research. From here, these concepts evolve further, bridging theories from contemporary creative and technical practices with the advancement of biomedical technologies.
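As a minimal illustration of the autoencoder idea described in this abstract — compressing sensor data into a low-dimensional latent code and reconstructing it as feedback — the sketch below trains a tiny linear autoencoder with NumPy. The dimensions, synthetic data, and training details are assumptions for illustration, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyAutoencoder:
    """Linear autoencoder: x -> latent z -> reconstruction x_hat."""

    def __init__(self, n_in, n_latent, lr=0.05):
        self.We = rng.normal(0.0, 0.1, (n_in, n_latent))   # encoder weights
        self.Wd = rng.normal(0.0, 0.1, (n_latent, n_in))   # decoder weights
        self.lr = lr

    def encode(self, x):
        return x @ self.We

    def train_step(self, x):
        z = self.encode(x)
        x_hat = z @ self.Wd
        err = x_hat - x                          # reconstruction error
        # Mean-squared-error gradients for both weight matrices.
        grad_Wd = z.T @ err / len(x)
        grad_We = x.T @ (err @ self.Wd.T) / len(x)
        self.Wd -= self.lr * grad_Wd
        self.We -= self.lr * grad_We
        return float((err ** 2).mean())

# Synthetic "sensor" windows lying near a 2-D subspace of an 8-D space.
basis = rng.normal(size=(2, 8))
data = rng.normal(size=(256, 2)) @ basis

ae = TinyAutoencoder(n_in=8, n_latent=2)
losses = [ae.train_step(data) for _ in range(200)]
# Reconstruction error falls as the latent code captures the 2-D structure;
# the 2-D codes from encode() could then drive real-time biofeedback.
```

In a real-time setting, each incoming sensor window would be passed through `encode()` and the latent coordinates mapped to sound or visual parameters.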

    Bodily Expression of Social Initiation Behaviors in ASC and non-ASC children: Mixed Reality vs. LEGO Game Play

    This study is part of a larger project that showed the potential of our mixed reality (MR) system in fostering social initiation behaviors in children with Autism Spectrum Condition (ASC). We compared it to a typical social intervention strategy based on construction tools, where both mediated a face-to-face dyadic play session between an ASC child and a non-ASC child. In this study, our first goal is to show that an MR platform can be utilized to alter the nonverbal body behavior between ASC and non-ASC during social interaction as much as a traditional therapy setting (LEGO). A second goal is to show how these body cues differ between ASC and non-ASC children during social initiation in these two platforms. We present our first analysis of the body cues generated under two conditions in a repeated-measures design. Body cue measurements were obtained through skeleton information and characterized in the form of spatio-temporal features from both subjects individually (e.g. distances between joints and velocities of joints), and interpersonally (e.g. proximity and visual focus of attention). We used machine learning techniques to analyze the visual data of eighteen trials of ASC and non-ASC dyads. Our experiments showed that: (i) there were differences between ASC and non-ASC bodily expressions, both at individual and interpersonal level, in LEGO and in the MR system during social initiation; (ii) the number of features indicating differences between ASC and non-ASC in terms of nonverbal behavior during initiation were higher in the MR system as compared to LEGO; and (iii) computational models evaluated with combination of these different features enabled the recognition of social initiation type (ASC or non-ASC) from body features in LEGO and in MR settings. We did not observe significant differences between the evaluated models in terms of performance for LEGO and MR environments. 
This might be interpreted as the MR system encouraging nonverbal behaviors in ASC and non-ASC children that are more similar to one another than in the LEGO environment, since recognition performance in the MR setting was lower than in the LEGO setting. These results demonstrate the potential benefits of full-body interaction and MR settings for children with ASC.
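The kinds of spatio-temporal body-cue features described in this abstract can be sketched as follows. The joint layout, frame rate, and function names are hypothetical — a minimal illustration rather than the study's actual pipeline.

```python
import numpy as np

# Illustrative body-cue features from skeleton data: individual features
# (inter-joint distances, joint speeds) and an interpersonal feature
# (proximity between two people's body centres). Assumed layout:
# a skeleton is an (n_joints, 3) array of 3-D joint positions in metres.

def joint_distances(skeleton):
    """Pairwise Euclidean distances between all joints in one frame."""
    diff = skeleton[:, None, :] - skeleton[None, :, :]
    return np.linalg.norm(diff, axis=-1)          # (n_joints, n_joints)

def joint_velocities(frames, fps=30.0):
    """Per-joint speeds between consecutive frames.
    frames: (n_frames, n_joints, 3) -> (n_frames - 1, n_joints) in m/s."""
    step = np.diff(frames, axis=0)                # displacement per frame
    return np.linalg.norm(step, axis=-1) * fps

def interpersonal_proximity(skel_a, skel_b):
    """Distance between the mean joint positions (body centres)."""
    return float(np.linalg.norm(skel_a.mean(axis=0) - skel_b.mean(axis=0)))
```

Feature vectors built from such measurements, per frame or aggregated over a session, are the sort of input a classifier could use to distinguish initiation types.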

    Touchomatic: interpersonal touch gaming in the wild

    Direct touch between people is a key element of social behaviour. Recently, a number of researchers have explored games which sense aspects of such interpersonal touch to control interaction with a multiplayer computer game. In this paper, we describe a long-term, in-the-wild study of a two-player arcade game which is controlled by gentle touching between the body parts of two players. We ran the game in a public videogame arcade for a year, and present a thematic analysis of 27 hours of gameplay session videos, organised under three top-level themes: control of the system, interpersonal interaction within the game, and social interaction around the game. In addition, we provide a quantitative analysis of observed demographic differences in interpersonal touch behaviour. Finally, we use these results to present four design recommendations for the use of interpersonal touch in games.

    On the Integration of Adaptive and Interactive Robotic Smart Spaces

    © 2015 Mauro Dragone et al. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0).
    Enabling robots to seamlessly operate as part of smart spaces is an important and extended challenge for robotics R&D and a key enabler for a range of advanced robotic applications, such as Ambient Assisted Living (AAL) and home automation. The integration of these technologies is currently being pursued from two largely distinct viewpoints. On the one hand, people-centred initiatives focus on improving user acceptance by tackling human-robot interaction (HRI) issues, often adopting a social robotics approach, and by giving the designer, and to a limited degree the final user(s), control over personalisation and product customisation features. On the other hand, technologically driven initiatives build impersonal but intelligent systems that are able to proactively and autonomously adapt their operations to fit changing requirements and evolving users’ needs, but which largely ignore and do not leverage human-robot interaction and may thus lead to poor user experience and acceptance. In order to inform the development of a new generation of smart robotic spaces, this paper analyses and compares different research strands with a view to proposing possible integrated solutions with both advanced HRI and online adaptation capabilities.

    The Impact of Social Expectation towards Robots on Human-Robot Interactions

    This work is presented in defence of the thesis that it is possible to measure the social expectations and perceptions that humans have of robots in an explicit and succinct manner, and that these measures are related to how humans interact with, and evaluate, these robots. There are many ways of understanding how humans may respond to, or reason about, robots as social actors, but the approach adopted within this body of work focused on interaction-specific expectations, rather than expectations regarding the true nature of the robot. These expectations were investigated using a questionnaire-based tool, the University of Hertfordshire Social Roles Questionnaire, which was developed as part of the work presented in this thesis and tested on a sample of 400 visitors to an exhibition in the Science Gallery in Dublin. This study suggested that responses to this questionnaire loaded on two main dimensions: one relating to the degree of social equality the participants expected the interactions with the robots to have, and the other to the degree of control they expected to exert upon the robots within the interaction. A single item, related to pet-like interactions, loaded on both and was considered a separate, third dimension. This questionnaire was deployed as part of a proxemics study, which found that the degree to which participants accepted particular proxemics behaviours was correlated with initial social expectations of the robot. If participants expected the robot to be more of a social equal, they preferred the robot to approach from the front, while participants who viewed the robot more as a tool preferred it to approach from a less obtrusive angle. The questionnaire was also deployed in two long-term studies.
In the first study, which involved one interaction a week over a period of two months, participants' social expectations of the robots prior to the beginning of the study impacted not only how participants evaluated open-ended interactions with the robots throughout the two-month period, but also how they collaborated with the robots in task-oriented interactions. In the second study, participants interacted with the robots twice a week over a period of 6 weeks. This study replicated the findings of the previous study, in that initial expectations impacted evaluations of interactions throughout the long-term study. In addition, this study used the questionnaire to measure post-interaction perceptions of the robots in terms of social expectations. The results suggest that while initial social expectations of robots impact how participants evaluate the robots in terms of interactional outcomes, social perceptions of robots are more closely related to the social/affective experience of the interaction.
    • 
