What does not happen: quantifying embodied engagement using NIMI and self-adaptors
Previous research into the quantification of embodied intellectual and emotional engagement using non-verbal movement parameters has not yielded consistent results across different studies. Our research introduces NIMI (Non-Instrumental Movement Inhibition) as an alternative parameter. We propose that the absence of certain types of possible movements can be a more holistic proxy for cognitive engagement with media (in seated persons) than searching for the presence of other movements. Rather than analyzing total movement as an indicator of engagement, our research team distinguishes between instrumental movements (i.e. physical movements serving a direct purpose in the given situation) and non-instrumental movements, and investigates them in the context of the narrative rhythm of the stimulus. We demonstrate that NIMI occurs by showing that viewers' movement levels entrain (i.e. synchronise) to the repeating narrative rhythm of a timed, computer-presented quiz. Finally, we discuss the role of objective metrics of engagement in future context-aware analysis of human behaviour in audience research, interactive media, and responsive system and interface design.
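The entrainment claim above can be illustrated computationally. The sketch below is not the paper's actual analysis: the sinusoidal template, the 25 Hz sampling rate, and the function name are our illustrative assumptions. It scores how strongly a movement time series synchronises to a repeating stimulus period by taking the peak correlation with a sinusoid at that period, searched over phase:

```python
import numpy as np

def entrainment_score(movement, period_s, fs=25.0):
    """Peak correlation between a movement time series and a sinusoid
    at the stimulus period, searched over phase. A score near 1 suggests
    entrainment to the narrative rhythm; near 0 suggests none.
    (Illustrative sketch; template, rate and name are assumptions.)"""
    m = np.asarray(movement, dtype=float)
    m = m - m.mean()
    t = np.arange(len(m)) / fs
    best = 0.0
    for phase in np.linspace(0.0, period_s, 20, endpoint=False):
        template = np.sin(2 * np.pi * (t + phase) / period_s)
        r = np.corrcoef(m, template)[0, 1]
        best = max(best, abs(r))
    return best
```

A movement trace oscillating at the quiz's question period would score near 1, while uncorrelated fidgeting would score near 0.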
A time series feature of variability to detect two types of boredom from motion capture of the head and shoulders
Boredom and disengagement metrics are crucial to the correctly timed implementation of adaptive interventions in interactive systems. Psychological research suggests that boredom (which other HCI teams have partially quantified with pressure-sensing chair mats) is actually a composite of two states: lethargy and restlessness. Here we present an innovative approach to the measurement and recognition of these two kinds of boredom, based on motion capture and video analysis of changes in head and shoulder positions. Discrete, three-minute, computer-presented stimuli (games, quizzes, films and music) covering a spectrum from engaging to boring/disengaging were used to elicit changes in cognitive/emotional states in seated, healthy volunteers. Interaction with the stimuli occurred via a handheld trackball rather than a mouse, so movements were assumed to be non-instrumental. Our results include a feature (the standard deviation of windowed ranges) that may be more specific to boredom than mean speed of head movement, and that could be implemented in computer vision algorithms for disengagement detection.
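The reported feature can be sketched directly. Assuming a 25 Hz tracking rate and 2-second non-overlapping windows (both illustrative values, not taken from the paper), the "standard deviation of windowed ranges" of a 1-D head-position trace might be computed as:

```python
import numpy as np

def sd_of_windowed_ranges(position, fs=25.0, window_s=2.0):
    """Split a 1-D position trace into non-overlapping windows and
    return the standard deviation of the per-window ranges.
    (Sketch of the 'SD of windowed ranges' idea; window length and
    sampling rate here are illustrative assumptions.)"""
    n = int(fs * window_s)
    n_windows = len(position) // n
    windows = np.asarray(position, dtype=float)[: n_windows * n].reshape(n_windows, n)
    ranges = windows.max(axis=1) - windows.min(axis=1)
    return ranges.std()
```

Intuitively, steady fidgeting yields similar ranges in every window (low SD), whereas alternating bouts of stillness and restlessness yield very different ranges (high SD), which is what makes the feature a candidate marker of composite boredom.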
Using body language indicators for assessing the effects of soundscape quality on individuals
“Sounding Brighton” is a collaborative project exploring practical approaches towards better soundscapes focusing on soundscape issues related to health, quality of life and restorative functions of the environment. The project is part of a citywide engagement process working to provide opportunities to demonstrate how an applied soundscape approach might: tackle conventional noise problems, contribute to local planning and improve the environment in areas including urban green spaces, the built environment and traffic noise. So far, a soundscape map of the city has been developed, and a public outreach exhibition and conferences have taken place. One preliminary, experimental soundscape intervention in night noise has been analysed.
This paper reports on further work to develop a better understanding of the effects of soundscapes on individual and community responses, through the use of body language indicators. Two-minute excerpts of aversive and preferred music were presented to 11 healthy volunteers in a motion-capture laboratory setting. Their responses were quantified computationally using motion-capture-derived parameters for position, absolute movement speed, and stillness. The prevalence of stillness in head height (based on a 2 cm cut-off within 2-second windows) was significantly lower when volunteers were exposed to unpleasant music than to preferred music. This experiment provides proof of principle that changes in soundscape can be associated with subsequent, objective and statistically significant changes in body language that can be detected computationally.
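The stillness measure described above can be approximated as follows. The 2 cm cut-off and 2-second windows come from the text; the 25 Hz sampling rate and the function name are our assumptions:

```python
import numpy as np

def stillness_prevalence(head_height_cm, fs=25.0, window_s=2.0, cutoff_cm=2.0):
    """Fraction of non-overlapping 2-s windows in which head height
    varies by less than 2 cm (sketch of the stillness measure;
    sampling rate is an assumed value)."""
    n = int(fs * window_s)
    n_windows = len(head_height_cm) // n
    w = np.asarray(head_height_cm, dtype=float)[: n_windows * n].reshape(n_windows, n)
    still = (w.max(axis=1) - w.min(axis=1)) < cutoff_cm
    return still.mean()
```

A perfectly still sitter scores 1.0; a sitter whose head height swings more than 2 cm in every window scores 0.0.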
Non-Instrumental Movement Inhibition (NIMI) differentially suppresses head and thigh movements during screenic engagement: dependence on interaction
BACKGROUND:
Estimating engagement levels from postural micromovements has been summarized by some researchers as follows: increased proximity to the screen is a marker of engagement, while increased postural movement is a signal of disengagement or negative affect. However, these findings are inconsistent: the movement hypothesis conflicts with other findings on dyadic interaction between humans, and experimental results on the positional hypothesis diverge from it.
HYPOTHESES:
(1) Under controlled conditions, adding a relevant visual stimulus to an auditory stimulus will preferentially result in Non-Instrumental Movement Inhibition (NIMI) of the head. (2) When instrumental movements are eliminated and computer-interaction rate is held constant, for two identically-structured stimuli, cognitive engagement (i.e., interest) will result in measurable NIMI of the body generally.
METHODS:
Twenty-seven healthy participants were seated in front of a computer monitor and speakers. Discrete 3-min stimuli were presented with interactions mediated via a handheld trackball without any keyboard, to minimize instrumental movements of the participant's body. Music videos and audio-only music were used to test hypothesis (1). Time-sensitive, highly interactive stimuli were used to test hypothesis (2). Subjective responses were assessed via visual analog scales. The computer users' movements were quantified using video motion tracking from the lateral aspect. Repeated measures ANOVAs with Tukey post hoc comparisons were performed.
RESULTS:
For two equivalently engaging music videos, eliminating the visual content elicited significantly increased non-instrumental movements of the head (while also decreasing subjective engagement); a highly engaging, user-selected piece of favorite music led to further increases in non-instrumental movement. For two comparable reading tasks, the more engaging reading significantly inhibited movement of the head and thigh (by 42%); however, when a highly engaging video game was compared to the boring reading, even though the reading task and the game involved similar levels of interaction (trackball clicks), only thigh movement was significantly inhibited, not head movement.
CONCLUSIONS:
NIMI can be elicited by adding a relevant visual accompaniment to an audio-only stimulus or by making a stimulus cognitively engaging. However, these results presume that all other factors are held constant, because total movement rates can be affected by cognitive engagement, instrumental movements, visual requirements, and the time-sensitivity of the stimulus.
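The 42% inhibition figure reported in the results is a simple percentage reduction in mean movement between conditions, which can be computed as follows (the function name is ours):

```python
import numpy as np

def percent_inhibition(speed_boring, speed_engaging):
    """Percent reduction in mean movement speed in the engaging
    condition relative to the boring one (cf. the reported 42%
    head/thigh inhibition; function name is illustrative)."""
    m_boring = np.mean(speed_boring)
    m_engaging = np.mean(speed_engaging)
    return 100.0 * (m_boring - m_engaging) / m_boring
```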
Proxemics of screen mediation: engagement with reading on screen manifests as diminished variation due to self-control, rather than diminished mean distance from screen
Objective: Burgoon's theory of conversational involvement suggests that when people engage with another person, they move slightly closer to them, often subtly and subconsciously. However, some studies have failed to extend this finding to human-computer interaction. Our hypothesis is that during online reading, engagement is associated with an expenditure of effort to hold the head upright, still, and central.
Method: We presented two reading stimuli, in counterbalanced order, to 27 participants (age 21.00 ± 2.89 years; 15 female) seated in front of a 47.5 × 27 cm monitor: one (interesting) based on a best-selling novel and the other (boring) based on European Union banking regulations. The participants were video-recorded during their reading while wearing reflective motion-tracking markers. The markers were video-tracked off-line using Kinovea 0.8.
Results: Subjective VAS ratings showed that the stimuli elicited the bored and interested states as expected. Video tracking showed that the boring stimulus (compared to the interesting reading) elicited greater head-to-screen velocity, a greater head-to-screen distance range, and a greater head-to-screen distance standard deviation, but not a greater mean head-to-screen distance.
Conclusions: The more interesting reading led to efforts to hold the head in a more central viewing position while suppressing head fidgeting.
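The head-to-screen measures compared in the results (mean distance, distance SD, distance range, and movement velocity) can be sketched as a single summary function; the names and the 25 Hz rate are illustrative, not the study's pipeline:

```python
import numpy as np

def head_screen_metrics(distance_cm, fs=25.0):
    """Summary metrics for a head-to-screen distance trace: mean,
    standard deviation, range, and mean absolute frame-to-frame
    velocity (an illustrative reconstruction of the reported measures)."""
    d = np.asarray(distance_cm, dtype=float)
    velocity = np.abs(np.diff(d)) * fs  # cm/s between consecutive frames
    return {
        "mean": d.mean(),
        "sd": d.std(),
        "range": d.max() - d.min(),
        "mean_speed": velocity.mean(),
    }
```

The paper's finding corresponds to a fidgety (bored) trace having higher sd, range, and mean_speed than a controlled (interested) trace, while the two means stay similar.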
Tracking nutrient decisions in Drosophila melanogaster
Animals integrate external sensory information and current metabolic needs to adapt their behavior and survive. Accordingly, many organisms can detect an internal nutritional imbalance and adjust their nutritional choices to restore homeostasis. Detailed quantitative analyses of nutrient-choice behaviors are needed to deepen our understanding of how neural circuits integrate internal state information and drive compensatory behavior when facing metabolic challenges. During this project, we developed an automated video-tracking setup to characterize how metabolic and reproductive states interact to shape the exploitation and exploration decisions taken by the adult fruit fly Drosophila melanogaster to achieve nutritional homeostasis. We find that these two states have specific effects on the decisions to stop at and leave proteinaceous food patches. Furthermore, the internal nutrient state defines the exploration-exploitation trade-off: nutrient-deprived flies focus on specific patches, while satiated flies explore more globally. We provide three examples of how our paradigm can be used to dissect the genetic and neuronal pathways underlying nutrient decisions. First, we show that olfaction is not required for the compensatory high yeast feeding after amino acid deprivation, but that it mediates the efficient recognition of yeast as an appropriate food source in mated females. Second, we show that octopamine is required to mediate homeostatic postmating responses without affecting internal nutrient sensing. Third, we show how gustation is required to sustain interest in protein-rich resources upon amino acid deprivation. Our results provide a quantitative description of how the fly changes behavioral decisions to achieve homeostatic nutrient balancing and a framework for future detailed mechanistic dissection of such decisions.
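A minimal version of the patch-visit segmentation underlying such a tracking analysis might look like this; the thresholds, names, and visit criterion are our illustrative assumptions, not the project's actual pipeline:

```python
import numpy as np

def patch_visits(xy, patch_center, patch_radius, min_stop=5):
    """Segment a tracked (x, y) trajectory into visits to a circular
    food patch: returns the number of visits lasting at least
    `min_stop` consecutive frames (a simplified sketch)."""
    xy = np.asarray(xy, dtype=float)
    # per-frame distance to the patch centre, then an inside/outside mask
    inside = np.hypot(*(xy - np.asarray(patch_center, dtype=float)).T) <= patch_radius
    visits = 0
    run = 0
    for flag in inside:
        run = run + 1 if flag else 0
        if run == min_stop:  # count each sufficiently long run exactly once
            visits += 1
    return visits
```

Comparing visit counts and durations between deprived and satiated flies is then a direct readout of the exploration-exploitation trade-off described above.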
Tailoring Interaction. Sensing Social Signals with Textiles.
Nonverbal behaviour is an important part of conversation and can reveal much about the nature of an interaction. It includes phenomena ranging from large-scale posture shifts to small-scale nods. Capturing these often spontaneous phenomena requires unobtrusive sensing techniques that do not interfere with the interaction. We propose an underexploited modality for sensing nonverbal behaviours: textiles. As a material in close contact with the body, they provide ubiquitous, large surfaces that make them a suitable soft interface. Although the literature on nonverbal communication focuses on upper-body movements such as gestures, observations of multi-party, seated conversations suggest that sitting postures, leg and foot movements are also systematically related to patterns of social interaction. This thesis addresses the following questions: Can the textiles surrounding us measure social engagement? Can they tell who is speaking, and who, if anyone, is listening? Furthermore, how should wearable textile sensing systems be designed, and what behavioural signals could textiles reveal? To address these questions, we have designed and manufactured bespoke chairs and trousers with integrated textile pressure sensors, which are introduced here. The designs are evaluated in three user studies that produce multi-modal datasets for the exploration of fine-grained interactional signals. Two approaches to using these bespoke textile sensors are explored. First, hand-crafted sensor patches in chair covers serve to distinguish speakers and listeners. Second, a pressure-sensitive matrix in custom-made smart trousers is developed to detect static sitting postures, dynamic bodily movement, as well as basic conversational states.
Statistical analyses, machine learning approaches, and ethnographic methods show that by monitoring patterns of pressure change alone it is possible not only to classify postures with high accuracy, but also to identify a wide range of behaviours reliably in individuals and groups. These findings establish textiles as a novel, wearable sensing system for applications in social sciences, and contribute towards a better understanding of nonverbal communication, especially the significance of posture shifts when seated. If chairs know who is speaking, if our trousers can capture our social engagement, what role can smart textiles have in the future of human interaction? How can we build new ways to map social ecologies and tailor interactions?
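As a toy illustration of posture classification from pressure data, a nearest-centroid classifier over flattened pressure-matrix frames could look like the following. This is a deliberate simplification of the machine-learning models used in the thesis, and all class, method, and label names are ours:

```python
import numpy as np

class PostureClassifier:
    """Nearest-centroid classifier over flattened pressure-matrix
    frames -- a simple stand-in for the thesis's actual models."""

    def fit(self, frames, labels):
        # flatten each pressure matrix into a feature vector
        X = np.asarray(frames, dtype=float).reshape(len(frames), -1)
        y = np.asarray(labels)
        self.classes_ = sorted(set(labels))
        # one mean pressure map ("centroid") per posture class
        self.centroids_ = np.array(
            [X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, frames):
        X = np.asarray(frames, dtype=float).reshape(len(frames), -1)
        dists = np.linalg.norm(X[:, None, :] - self.centroids_[None], axis=2)
        return [self.classes_[i] for i in dists.argmin(axis=1)]
```

Even this crude model separates postures whose pressure distributions differ systematically, which hints at why richer models over the same data can also recover conversational states.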
Shear-promoted drug encapsulation into red blood cells: a CFD model and μ-PIV analysis
The present work focuses on the main parameters that influence shear-promoted encapsulation of drugs into erythrocytes. A CFD model was built to investigate the fluid dynamics of a suspension of particles flowing in a commercial microchannel. Micro Particle Image Velocimetry (μ-PIV) made it possible to account for the real properties of the red blood cell (RBC), giving a deeper understanding of the process. Coupling these results with an analytical diffusion model, suitable working conditions were defined for different values of haematocrit.
NON-VERBAL COMMUNICATION WITH PHYSIOLOGICAL SENSORS. THE AESTHETIC DOMAIN OF WEARABLES AND NEURAL NETWORKS
Historically, communication implies the transfer of information between bodies, yet this phenomenon is constantly adapting to new technological and cultural standards. In a digital context, it is commonplace to envision systems that revolve around verbal modalities. However, behavioural analysis grounded in psychology research calls attention to the emotional information disclosed by non-verbal social cues, in particular, actions that are involuntary. This notion has circulated widely through various interdisciplinary computing research fields, from which multiple studies have arisen correlating non-verbal activity with socio-affective inferences. These are often derived from some form of motion capture and other wearable sensors, measuring the ‘invisible’ bioelectrical changes that occur inside the body.
This thesis proposes a motivation and methodology for using physiological sensory data as an expressive resource for technology-mediated interactions. It begins with a thorough discussion of state-of-the-art technologies and established design principles on this topic, which is then applied in a novel approach alongside a selection of practice works that complement it. We advocate for aesthetic experience, experimenting with abstract representations. Unlike prevailing Affective Computing systems, the intention is not to infer or classify emotion but rather to create new opportunities for rich gestural exchange, unconfined to the verbal domain.
Given the preliminary proposition of non-representation, we justify a correspondence with modern Machine Learning and multimedia interaction strategies, applying an iterative, human-centred approach to improve personalisation without compromising the emotional potential of bodily gesture. Where related studies have successfully provoked strong design concepts through innovative fabrications, these are typically limited to simple linear, one-to-one mappings and often neglect multi-user environments; we foresee a vast potential here. In our use cases, we adopt neural network architectures to generate highly granular biofeedback from low-dimensional input data.
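The idea of expanding low-dimensional sensor input into granular biofeedback can be sketched as a small decoder network. The weights below are random and untrained (the thesis projects train such models on recorded sensor data), and all sizes and names are illustrative:

```python
import numpy as np

class Decoder:
    """Minimal decoder MLP: expands a low-dimensional sensor reading
    (e.g. a few physiological channels) into a high-dimensional control
    vector (e.g. per-channel sound parameters). Weights are random here;
    all dimensions and names are illustrative assumptions."""

    def __init__(self, in_dim=3, hidden=16, out_dim=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.5, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.5, (hidden, out_dim))
        self.b2 = np.zeros(out_dim)

    def __call__(self, x):
        h = np.tanh(np.asarray(x, dtype=float) @ self.w1 + self.b1)
        return np.tanh(h @ self.w2 + self.b2)  # outputs bounded in [-1, 1]
```

Because the mapping is smooth and many-to-many rather than linear one-to-one, small gestural changes in the input ripple across many output channels at once, which is the kind of rich exchange argued for above.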
We present the following proofs of concept: Breathing Correspondence, a wearable biofeedback system inspired by Somaesthetic design principles; Latent Steps, a real-time autoencoder to represent bodily experiences from sensor data, designed for dance performance; and Anti-Social Distancing Ensemble, an installation for public-space interventions, analysing physical distance to generate a collective soundscape. Key findings are extracted from the individual reports to formulate an extensive technical and theoretical framework around this topic. The projects first aim to embrace some alternative perspectives already established within Affective Computing research. From there, these concepts evolve further, bridging theories from contemporary creative and technical practices with
the advancement of biomedical technologies.
Interactions Between Patterns of Gamer Behaviors and Time-on-Task for Mathematics Remediation in a Game-based HIVE
As the presence of digital game-based learning increases in United States classrooms, understanding its impact on achievement is critical. Digital games for learning offer many potential benefits, including reducing the number of students trapped in a remediation cycle, a contributor to college dropout. Despite the recognized potential of game-based learning, few researchers have explored the relationships between specific patterns of behaviors and types of digital game-based learning environments. The underlying theory for this study was that patterns of gamer behaviors may predict in-game behaviors. Archival, third-party data regarding The Lost Function - Episode 1: Sum of the Forgotten Minds by Advanced Training & Learning Technology, LLC were used in this study. Using 4 case groups at the high school and college levels (n=114), self-reported levels of the 3 patterns of gamer behaviors, gender, and age-band were analyzed using multiple regression to determine their relationships to time-on-task in a game-based highly interactive virtual environment (HIVE) designed for mathematics remediation. While the results were inconclusive, this study supported the existing literature regarding gender differences and the lack of mutual exclusivity in behavior typing. Recommendations include additional research into how the statements used in the 3-factor model may be adjusted to allow for a broader population of game players. The social change implication is that further understanding of the relationship between learner traits and digital learning environments may give educators who employ digital game-based learning a way to better align learners with the most appropriate digital learning environment, thereby increasing their chances of success.
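The multiple-regression analysis described above can be sketched with ordinary least squares. All function and variable names are ours, and the data in the usage example are synthetic, not the study's:

```python
import numpy as np

def fit_time_on_task_model(X, y):
    """Ordinary-least-squares multiple regression with an intercept,
    returning coefficients and R^2 -- the kind of fit used to relate
    gamer-behaviour scores, gender and age-band to time-on-task.
    (Illustrative sketch; names are not the study's.)"""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1.0 - resid.var() / y.var()
    return coef, r2
```

With predictors such as the three gamer-behavior scores plus coded gender and age-band, `coef` gives the estimated contribution of each predictor to time-on-task and `r2` the variance explained.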