Proxemics of screen mediation: engagement with reading on screen manifests as diminished variation due to self-control, rather than diminished mean distance from screen
Objective: Burgoon's theory of conversational involvement suggests that when people engage with a person, they move slightly closer to them, often subtly and subconsciously. However, some studies have failed to extend this finding to human-computer interaction. Our hypothesis is that during online reading, engagement is associated with an expenditure of effort to hold the head upright, still, and central.
Method: We presented two reading stimuli, in counterbalanced order, to 27 participants (age 21.00 ± 2.89 years; 15 female) seated in front of a 47.5 × 27 cm monitor: one (interesting) based on a best-selling novel and the other (boring) on European Union banking regulations. The participants were video-recorded during reading while wearing reflective motion-tracking markers. The markers were video-tracked off-line using Kinovea 0.8.
Results: Subjective VAS ratings showed that the stimuli elicited the bored and interested states as expected. Video tracking showed that the boring stimulus (compared with the interesting reading) elicited a greater head-to-screen velocity, a greater head-to-screen distance range, and a greater head-to-screen distance standard deviation, but not a greater head-to-screen mean distance.
Conclusions: The more interesting reading led to efforts to control the head to a more central viewing position while suppressing head fidgeting.
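The distance statistics compared in this study can be sketched as a short analysis routine. This is a minimal illustration, assuming a per-frame head-to-screen distance series (in cm) has already been extracted from the tracked markers; the function name, sampling rate, and example traces are hypothetical, not from the study:

```python
import numpy as np

def head_screen_metrics(distances_cm, fps=30.0):
    """Summarise a head-to-screen distance time series (cm).

    Returns the four statistics compared in the study: mean distance,
    distance range, distance standard deviation, and mean absolute
    head-to-screen velocity (cm/s).
    """
    d = np.asarray(distances_cm, dtype=float)
    velocity = np.abs(np.diff(d)) * fps  # cm per second between frames
    return {
        "mean_distance": d.mean(),
        "distance_range": d.max() - d.min(),
        "distance_sd": d.std(ddof=1),
        "mean_abs_velocity": velocity.mean(),
    }

# Two illustrative traces around the same mean distance:
# a "still" reader versus a "fidgeting" reader.
still = 55 + 0.5 * np.sin(np.linspace(0, 10, 300))
fidgety = 55 + 5.0 * np.sin(np.linspace(0, 10, 300))
m_still = head_screen_metrics(still)
m_fidgety = head_screen_metrics(fidgety)
```

On such traces, the fidgeting reader shows a larger range, standard deviation, and velocity while the mean distance stays roughly the same, which is the pattern of results the abstract reports.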
The Case for Public Interventions during a Pandemic
Funding Information: This work has been supported by Marie Skłodowska-Curie Actions ITN AffecTech (ERC H2020 Project ID: 722022). Publisher Copyright: © 2022 by the authors.

Within the field of movement sensing and sound interaction research, multi-user systems have gradually gained interest as a means to facilitate an expressive non-verbal dialogue. Drawing on studies grounded in psychology and choreographic theory, we consider the qualities of interaction that foster an elevated sense of social connectedness without requiring people to occupy one another's personal space. Reflecting on the newly adopted concept of social distancing, we orchestrate a technological intervention that places interpersonal distance and sound at the core of interaction. Materialised as a set of sensory face-masks, a novel wearable system was developed and tested in the context of a live public performance, from which we obtained the users' individual perspectives and correlated these with patterns identified in the recorded data. We identify and discuss traits of user behaviour attributable to the system's influence and construct four fundamental design considerations for physically distanced sound interaction. The study concludes with essential technical reflections, accompanied by an adaptation for a pervasive sensory intervention that was finally deployed in an open public space.
Non-verbal Communication with Physiological Sensors: The Aesthetic Domain of Wearables and Neural Networks
Historically, communication implies the transfer of information between bodies, yet this phenomenon is constantly adapting to new technological and cultural standards. In a digital context, it is commonplace to envision systems that revolve around verbal modalities. However, behavioural analysis grounded in psychology research calls attention to the emotional information disclosed by non-verbal social cues, in particular actions that are involuntary. This notion has circulated widely through various interdisciplinary computing research fields, from which multiple studies have arisen correlating non-verbal activity with socio-affective inferences. These are often derived from some form of motion capture and other wearable sensors, measuring the 'invisible' bioelectrical changes that occur inside the body.
This thesis proposes a motivation and methodology for using physiological sensory data as an expressive resource for technology-mediated interactions. It begins with a thorough discussion of state-of-the-art technologies and established design principles on this topic, which is then applied to a novel approach alongside a selection of practice works that complement it. We advocate for aesthetic experience, experimenting with abstract representations. Unlike prevailing Affective Computing systems, the intention is not to infer or classify emotion but rather to create new opportunities for rich gestural exchange, unconfined to the verbal domain.
Given the preliminary proposition of non-representation, we justify a correspondence with modern Machine Learning and multimedia interaction strategies, applying an iterative, human-centred approach to improve personalisation without compromising the emotional potential of bodily gesture. Where related studies have successfully provoked strong design concepts through innovative fabrications, these are typically limited to simple linear, one-to-one mappings and often neglect multi-user environments; here we foresee a vast potential. In our use cases, we adopt neural network architectures to generate highly granular biofeedback from low-dimensional input data.
We present the following proof-of-concepts: Breathing Correspondence, a wearable biofeedback system inspired by Somaesthetic design principles; Latent Steps, a real-time autoencoder to represent bodily experiences from sensor data, designed for dance performance; and Anti-Social Distancing Ensemble, an installation for public-space interventions, analysing physical distance to generate a collective soundscape. Key findings are extracted from the individual reports to formulate an extensive technical and theoretical framework around this topic. The projects first aim to embrace some alternative perspectives already established within Affective Computing research. From here, these concepts evolve further, bridging theories from contemporary creative and technical practices with the advancement of biomedical technologies.
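The core technical idea of the thesis abstract, expanding low-dimensional sensor input into a richer feedback space with a neural network, can be sketched with a toy model. This is not the thesis's actual architecture: it is a minimal two-layer regression network in plain NumPy, trained on synthetic data, mapping 2-D "sensor" inputs to 16 "feedback" channels; all shapes and the target mapping are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D "sensor" inputs expanded to 16 "feedback" channels
# via a fixed nonlinear target mapping the network must learn.
X = rng.uniform(-1, 1, size=(256, 2))
W_true = rng.normal(size=(2, 16))
Y = np.tanh(X @ W_true)

# One hidden layer, plain gradient descent on mean squared error.
W1 = rng.normal(scale=0.5, size=(2, 32))
W2 = rng.normal(scale=0.5, size=(32, 16))
lr = 0.05

def forward(X, W1, W2):
    H = np.tanh(X @ W1)   # hidden activations
    return H, H @ W2      # linear read-out to 16 channels

_, Y0 = forward(X, W1, W2)
err_before = np.mean((Y0 - Y) ** 2)

for _ in range(500):
    H, Y_hat = forward(X, W1, W2)
    G = 2 * (Y_hat - Y) / len(X)                   # dMSE/dY_hat
    W2 -= lr * H.T @ G                             # output-layer gradient
    W1 -= lr * X.T @ ((G @ W2.T) * (1 - H ** 2))   # backprop through tanh

_, Y1 = forward(X, W1, W2)
err_after = np.mean((Y1 - Y) ** 2)
```

The point of the sketch is only the shape of the mapping: a few input dimensions (e.g. one breathing signal) driving many continuous output channels, with the network learned rather than hand-mapped one-to-one.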
Bodily Expression of Social Initiation Behaviors in ASC and non-ASC children: Mixed Reality vs. LEGO Game Play
This study is part of a larger project that showed the potential of our
mixed reality (MR) system in fostering social initiation behaviors
in children with Autism Spectrum Condition (ASC). We compared
it to a typical social intervention strategy based on construction
tools, where both mediated a face-to-face dyadic play session between
an ASC child and a non-ASC child. In this study, our first goal is to show that an MR platform can be utilized to alter the nonverbal body behavior between ASC and non-ASC children during social interaction as much as a traditional therapy setting (LEGO). A second goal is to show how these body cues differ between ASC and non-ASC children during social initiation on these two platforms.
We present our first analysis of the body cues generated under two
conditions in a repeated-measures design. Body cue measurements
were obtained through skeleton information and characterized in
the form of spatio-temporal features from both subjects individually
(e.g. distances between joints and velocities of joints), and
interpersonally (e.g. proximity and visual focus of attention). We
used machine learning techniques to analyze the visual data of eighteen trials of ASC and non-ASC dyads. Our experiments showed that: (i) there were differences between ASC and non-ASC bodily expressions, at both the individual and interpersonal level, in LEGO and in the MR system during social initiation; (ii) the number of features indicating differences between ASC and non-ASC in terms of nonverbal behavior during initiation was higher in the MR system than in LEGO; and (iii) computational models evaluated with combinations of these different features enabled the recognition of social initiation type (ASC or non-ASC) from body features in both LEGO and MR settings. We did not observe significant differences between the evaluated models in terms of performance for the LEGO and MR environments. Since the performance scores in the MR setting are lower than in the LEGO setting, this might be interpreted as the MR system encouraging similar nonverbal behaviors in the children, perhaps more similar than in the LEGO environment. These results demonstrate the potential benefits of full-body interaction and MR settings for children with ASC.
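The spatio-temporal body-cue features described above (distances between joints, velocities of joints, interpersonal proximity) can be sketched directly from skeleton data. A minimal illustration, assuming each skeleton arrives as a (frames, joints, 3) array of positions; the function names, joint indexing, and frame rate are hypothetical, not the study's implementation:

```python
import numpy as np

def joint_distance(a, b):
    """Per-frame Euclidean distance between two joint trajectories.

    a, b: arrays of shape (frames, 3) with x, y, z joint positions.
    """
    return np.linalg.norm(np.asarray(a) - np.asarray(b), axis=1)

def joint_velocity(a, fps=30.0):
    """Per-frame speed of one joint (units/s); shape (frames - 1,)."""
    return np.linalg.norm(np.diff(np.asarray(a), axis=0), axis=1) * fps

def interpersonal_proximity(skel_a, skel_b, joint=0):
    """Frame-wise distance between the same joint of two skeletons,
    as a simple proximity feature for a dyad."""
    return joint_distance(skel_a[:, joint], skel_b[:, joint])

# Two toy skeletons: 100 frames, 15 joints, 3-D positions,
# one child held at the origin and the other offset by 1 unit per axis.
frames, joints = 100, 15
child_a = np.zeros((frames, joints, 3))
child_b = np.ones((frames, joints, 3))
prox = interpersonal_proximity(child_a, child_b)
```

Statistics of such per-frame series (means, ranges, variances per trial) are the kind of feature vector a classifier could then use to recognise the initiation type.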
Touchomatic: interpersonal touch gaming in the wild
Direct touch between people is a key element of social behaviour. Recently, a number of researchers have explored games which sense aspects of such interpersonal touch to control interaction with a multiplayer computer game. In this paper, we describe a long-term, in-the-wild study of a two-player arcade game which is controlled by gentle touching between the body parts of two players. We ran the game in a public videogame arcade for a year, and present a thematic analysis of 27 hours of gameplay session videos, organized under three top-level themes: control of the system, interpersonal interaction within the game, and social interaction around the game. In addition, we provide a quantitative analysis of observed demographic differences in interpersonal touch behaviour. Finally, we use these results to present four design recommendations for the use of interpersonal touch in games.
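A control pipeline of the kind described, mapping gentle inter-player touch to a continuous game input, might be sketched as follows. The sensing details here (a normalised per-frame contact reading, the baseline and ceiling values, the smoothing factor) are assumptions for illustration, not the paper's implementation:

```python
def touch_to_control(raw_readings, baseline=0.05, ceiling=0.8):
    """Map raw inter-player contact readings (assumed normalised to
    0..1) to a 0..1 game control value per frame.

    Readings at or below `baseline` (no contact) map to 0.0; readings
    at or above `ceiling` (firm contact) map to 1.0. Exponential
    smoothing suppresses frame-to-frame sensor jitter.
    """
    control = []
    smoothed = raw_readings[0]
    for r in raw_readings:
        smoothed = 0.8 * smoothed + 0.2 * r          # exponential smoothing
        level = (smoothed - baseline) / (ceiling - baseline)
        control.append(min(1.0, max(0.0, level)))    # clamp to 0..1
    return control

# No touch for 5 frames, then sustained firm touch for 20 frames.
signal = [0.0] * 5 + [0.9] * 20
out = touch_to_control(signal)
```

The smoothing means the control value ramps up over several frames rather than jumping, which keeps accidental brushes from registering as full presses.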
On the Integration of Adaptive and Interactive Robotic Smart Spaces
© 2015 Mauro Dragone et al. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0). Enabling robots to seamlessly operate as part of smart spaces is an important and extended challenge for robotics R&D and a key enabler for a range of advanced robotic applications, such as Ambient Assisted Living (AAL) and home automation. The integration of these technologies is currently being pursued from two largely distinct viewpoints. On the one hand, people-centred initiatives focus on improving user acceptance by tackling human-robot interaction (HRI) issues, often adopting a social robotics approach, and by giving the designer and, to a limited degree, the final user(s) control over personalization and product customisation features. On the other hand, technologically driven initiatives are building impersonal but intelligent systems that are able to proactively and autonomously adapt their operations to fit changing requirements and evolving users' needs, but which largely ignore and do not leverage human-robot interaction and may thus lead to poor user experience and acceptance. In order to inform the development of a new generation of smart robotic spaces, this paper analyses and compares different research strands with a view to proposing possible integrated solutions with both advanced HRI and online adaptation capabilities.
The Impact of Social Expectation towards Robots on Human-Robot Interactions
This work is presented in defence of the thesis that it is possible to measure, in an explicit and succinct manner, the social expectations and perceptions that humans have of robots, and that these measures are related to how humans interact with, and evaluate, these robots. There are many ways of understanding how humans may respond to, or reason about, robots as social actors, but the approach adopted within this body of work focused on interaction-specific expectations rather than expectations regarding the true nature of the robot. These expectations were investigated using a questionnaire-based tool, the University of Hertfordshire Social Roles Questionnaire, which was developed as part of the work presented in this thesis and tested on a sample of 400 visitors to an exhibition in the Science Gallery in Dublin. This study suggested that responses to the questionnaire loaded on two main dimensions: one related to the degree of social equality the participants expected the interactions with the robots to have, and the other related to the degree of control they expected to exert upon the robots within the interaction. A single item, related to pet-like interactions, loaded on both and was considered a separate, third dimension.
This questionnaire was deployed as part of a proxemics study, which found that the degree to which participants accepted particular proxemics behaviours was correlated with initial social expectations of the robot. If participants expected the robot to be more of a social equal, then the participants preferred the robot to approach from the front, while participants who viewed the robot more as a tool preferred it to approach from a less obtrusive angle.
The questionnaire was also deployed in two long-term studies. In the first study, which involved one interaction a week over a period of two months, participants' social expectations of the robots prior to the beginning of the study not only impacted how they evaluated open-ended interactions with the robots throughout the two-month period, but also how they collaborated with the robots in task-oriented interactions. In the second study, participants interacted with the robots twice a week over a period of six weeks. This study replicated the findings of the first, in that initial expectations impacted evaluations of interactions throughout the long-term study. In addition, this study used the questionnaire to measure post-interaction perceptions of the robots in terms of social expectations. The results suggest that while initial social expectations of robots impact how participants evaluate the robots in terms of interactional outcomes, social perceptions of robots are more closely related to the social/affective experience of the interaction.