36 research outputs found

    Designing a gesture-sound wearable system to motivate physical activity by altering body perception

    People, through their bodily actions, engage in sensorimotor loops that connect them to the world and to their own bodies. The brain integrates the incoming sensory information to form mental representations of the body's appearance and capabilities. Technology provides exceptional opportunities to tweak these sensorimotor loops and give people different experiences of their bodies. We recently showed that real-time sound feedback on one's movement (a sonic avatar) can alter people's body perception and, in turn, enhance motor behaviour, confidence and motivation for physical activity (PA), while increasing positive emotions towards their own bodies. Here we describe the design process of a wearable prototype that aims to investigate how action-sound loops can be used to overcome known body-perception-related psychological barriers to PA. The prototype consists of sensors that capture people's bodily actions and a gesture-sound palette that allows different action-sound mappings. Grounded in neuroscientific, clinical and sports psychology studies on body perception and PA, the ultimate design aim is to increase PA in inactive populations by changing their bodily experience.
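    To make the action-sound loop concrete, here is a minimal sketch of one mapping a gesture-sound palette could contain: accelerometer magnitude driving the pitch and loudness of a synthesised tone. The sensor range, mapping curve, and function names are illustrative assumptions, not the prototype's actual design.

```python
# A minimal sketch of one action-sound mapping: accelerometer magnitude is
# mapped to the pitch and loudness of a short synthesised tone. Ranges and
# the mapping curve are illustrative assumptions, not the authors' design.
import numpy as np

SAMPLE_RATE = 44100

def movement_to_tone(accel_magnitude: float, duration: float = 0.1) -> np.ndarray:
    """Map one accelerometer magnitude reading (in g) to a short tone.

    Larger, more vigorous movements yield higher, louder tones -- one simple
    way to make movement feel 'lighter' or more capable through sound.
    """
    # Clamp to an assumed 0-3 g range typical of wearable accelerometers.
    m = min(max(accel_magnitude, 0.0), 3.0)
    frequency = 220.0 + (m / 3.0) * 660.0   # 220 Hz (A3) up to 880 Hz (A5)
    amplitude = 0.2 + (m / 3.0) * 0.6       # quiet at rest, loud when moving
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    return (amplitude * np.sin(2 * np.pi * frequency * t)).astype(np.float32)

# Example: readings from a slow reach turning into a brisk swing.
for reading in [0.1, 0.4, 1.2, 2.5]:
    tone = movement_to_tone(reading)
    print(f"{reading:.1f} g -> {len(tone)} samples, peak {tone.max():.2f}")
```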

    Smartphone-Assessed Movement Predicts Music Properties : Towards Integrating Embodied Music Cognition into Music Recommender Services via Accelerometer

    Numerous studies have shown a close relationship between movement and music [7], [17], [11], [14], [16], [3], [8]. That is why Leman calls for new mediation technologies to query music in a corporeal way [9]. The goal of the presented study was therefore to explore how movement captured by smartphone accelerometer data can be related to musical properties. Participants (N = 23, mean age = 34.6 yrs, SD = 13.7 yrs, 13 females, 10 males) moved a smartphone to 15 musical stimuli of 20 s length presented in random order. Motion features related to tempo, smoothness, size, regularity, and direction were extracted from the accelerometer data to predict the musical qualities "rhythmicity", "pitch level + range" and "complexity" assessed by three music experts. Motion features selected by a lasso with 20-fold cross-validation predicted the musical properties to the following degrees: "rhythmicity" (R² = .47), "pitch level + range" (R² = .03) and "complexity" (R² = .10). We conclude that musical properties can be predicted from the movement they evoke, and that an embodied approach to Music Information Retrieval is feasible.
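    The prediction step lends itself to a short sketch: lasso regression with 20-fold cross-validation, as in the study, fitted here on synthetic stand-in data. The feature names and data are assumptions for illustration; the study used real accelerometer-derived features and expert ratings.

```python
# A minimal sketch of the prediction step: lasso regression with 20-fold
# cross-validation selecting motion features to predict an expert-rated
# musical property. The synthetic data stands in for real accelerometer
# features; coefficients and scores here are purely illustrative.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)

# Assumed motion features per trial: tempo, smoothness, size, regularity, direction.
X = rng.normal(size=(345, 5))  # e.g. 23 participants x 15 stimuli
y = 0.8 * X[:, 0] + 0.3 * X[:, 3] + rng.normal(scale=0.5, size=345)  # "rhythmicity"

model = LassoCV(cv=20).fit(X, y)
print("selected coefficients:", model.coef_)
print("R^2 on training data:", model.score(X, y))
```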

    Creating Virtual Characters

    An encounter with a virtual person can be one of the most compelling experiences in immersive virtual reality, as Mel Slater and his group have shown in many experiments on social interaction in VR. Much of this is due to virtual reality's ability to accurately represent body language, since participants can share a 3D space with a character. However, creating virtual characters capable of body language is a challenging task: it is a tacit, embodied skill that cannot be well represented in code. This paper surveys a series of experiments performed by Mel Slater and colleagues that show the power of virtual characters in VR, summarizes details of the technical infrastructure used, and presents Slater's theories of why virtual characters are effective. It then discusses the issues involved in creating virtual characters and the type of tool required. It concludes by proposing that Interactive Machine Learning can provide this type of tool.
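    The closing proposal invites a small illustration: in an interactive machine learning workflow, a designer demonstrates labelled examples, the model retrains immediately, and the result can be tried at once. The pose features and labels below are invented for illustration; this is not the paper's tool.

```python
# A minimal sketch of an interactive-machine-learning loop for authoring
# character body language: each demonstration retrains the model instantly,
# so a designer can work by showing rather than coding. Features and labels
# are invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

examples, labels = [], []

def demonstrate(pose_features, label):
    """Add one demonstration and retrain -- fast enough to feel instantaneous."""
    examples.append(pose_features)
    labels.append(label)
    return KNeighborsClassifier(n_neighbors=1).fit(examples, labels)

# Designer demonstrates two expressive poses (e.g. [head_tilt, arm_openness]).
model = demonstrate([0.1, 0.9], "confident")
model = demonstrate([0.7, 0.2], "shy")
print(model.predict([[0.2, 0.8]]))  # -> ['confident']
```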

    Expressive movement generation with machine learning

    Movement is an essential aspect of our lives. Not only do we move to interact with our physical environment, but we also express ourselves and communicate with others through our movements. In an increasingly computerized world where various technologies and devices surround us, our movements are essential parts of our interaction with and consumption of computational devices and artifacts. In this context, incorporating an understanding of our movements within the design of the technologies surrounding us can significantly improve our daily experiences. This need has given rise to the field of movement computing – developing computational models of movement that can perceive, manipulate, and generate movements. In this thesis, we contribute to the field of movement computing by building machine-learning-based solutions for automatic movement generation. In particular, we focus on using machine learning techniques and motion capture data to create controllable, generative movement models. We also contribute to the field by releasing the datasets, tools, and libraries developed during our research. We start by reviewing work on building automatic movement generation systems using machine learning techniques and motion capture data. Our review covers background topics such as high-level movement characterization, training data, feature representation, machine learning models, and evaluation methods. Building on our literature review, we present WalkNet, an interactive agent walking movement controller based on neural networks. The expressivity of virtual, animated agents plays an essential role in their believability. WalkNet therefore integrates control over the expressive qualities of movement with the goal-oriented behaviour of an animated virtual agent, allowing real-time control of generation based on affective valence and arousal, walking direction, and the mover's movement signature. Following WalkNet, we look at controlling movement generation with more complex stimuli such as music represented by audio signals (i.e., non-symbolic music). Music-driven dance generation involves a highly non-linear mapping between temporally dense stimuli (the audio signal) and movements, which makes the modelling problem considerably more challenging. To this end, we present GrooveNet, a real-time machine learning model for music-driven dance generation.
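    As an illustration of the kind of controllable generation the thesis describes, a minimal sketch follows: a small PyTorch network that predicts the next pose from the previous pose plus a control vector (valence, arousal, walking direction). The architecture, dimensions, and names are assumptions for illustration, not WalkNet's or GrooveNet's actual design.

```python
# A minimal sketch of a controllable movement generator in the spirit of
# WalkNet: the next pose is predicted from the previous pose plus a control
# vector. This illustrative architecture is not the thesis's actual network.
import torch
import torch.nn as nn

POSE_DIM, CTRL_DIM = 63, 3  # e.g. 21 joints x 3D; valence, arousal, direction

class PoseStepper(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(POSE_DIM + CTRL_DIM, hidden),
            nn.ReLU(),
            nn.Linear(hidden, POSE_DIM),
        )

    def forward(self, pose: torch.Tensor, control: torch.Tensor) -> torch.Tensor:
        # Predict a pose delta so the model learns smooth frame-to-frame change.
        return pose + self.net(torch.cat([pose, control], dim=-1))

model = PoseStepper()
pose = torch.zeros(1, POSE_DIM)
control = torch.tensor([[0.5, -0.2, 0.0]])  # valence, arousal, direction
for _ in range(3):                          # roll out a short motion sequence
    pose = model(pose, control)
print(pose.shape)  # torch.Size([1, 63])
```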

    Developing and evaluating a model for human motion to facilitate low degree-of-freedom robot imitation of human movement

    Imitation of human motion is a necessary activity for robots to integrate seamlessly into human-facing environments. While perfect replication is not possible, especially for low degree-of-freedom (DOF) robots, this thesis presents a model for human motion that achieves perceptual imitation. Motion capture data of dyadic interactions was first analyzed to quantify characteristics observed in the movement; the leaning of the spine, or verticality, was found to correlate with these observations. Verticality then inspired a low-DOF, motion-capture-driven model of human motion that can command the movement of simulated robots. Experiments were developed to test users' perception of human-motion imitation by these 3- and 4-DOF simulated robots. In an initial study, verticality-based motion was preferred over artificially generated motion for the higher-DOF robot, Broombot, which in turn was preferred over the lower-DOF robot, Rollbot. A further study tested users' preferences when the mapping between human and robot motion was changed for variable human motion. Motion-capture-based motion was preferred over artificially generated motion, and a sub-group of respondents was found who preferred verticality and were more engaged in the survey. Since the experiments were performed using motion capture data from a trained ballet dancer, a discussion of the differences between two Indian classical dance styles is included, showing that verticality alone is not representative of all motion and prompting further analysis toward socially adaptive robot behavior. In-progress and future work include a hardware implementation that will allow real-time motion capture data to drive simulated and/or physical robots. Menagerie is an in-development performance using the tools developed in this thesis that can include a human moving together with simulated and/or physical robots.
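    The verticality measure is simple enough to sketch: the angle of the pelvis-to-chest vector away from the world vertical, computed from motion capture joint positions. The joint names and coordinate convention (y-up) are assumptions for illustration.

```python
# A minimal sketch of a verticality measure: the spine's lean from the world
# vertical, computed from two mocap joint positions. Joint names and the
# y-up convention are illustrative assumptions, not the thesis's pipeline.
import numpy as np

def verticality(pelvis: np.ndarray, chest: np.ndarray) -> float:
    """Return the spine's lean from vertical in degrees (0 = fully upright)."""
    spine = chest - pelvis
    cos_angle = spine @ np.array([0.0, 1.0, 0.0]) / np.linalg.norm(spine)
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# An upright stance vs. a forward lean, positions in metres (y is up).
print(verticality(np.array([0, 1.0, 0]), np.array([0, 1.5, 0])))    # 0.0
print(verticality(np.array([0, 1.0, 0]), np.array([0.3, 1.4, 0])))  # ~36.9
```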

    End-user action-sound mapping design for mid-air music performance

    How to design the relationship between a performer's actions and an instrument's sound response has been a consistent theme in Digital Musical Instrument (DMI) research. Previously, mapping was seen purely as an activity for DMI creators, but more recent work has exposed mapping design to DMI musicians, with many in the field introducing software to facilitate end-user mapping, democratising this aspect of the DMI design process. This end-user mapping process provides musicians with a novel avenue for creative expression, and offers a unique opportunity to examine how practising musicians approach mapping design. Most DMIs suffer from a lack of practitioners beyond their initial designer, and few are used by professional musicians over extended periods. The Mi.Mu Gloves are one of the few examples of a DMI used by a dedicated group of practising musicians, many of whom use the instrument in their professional practice, with end-user mapping design a significant aspect of their creative practice. The research presented in this dissertation investigates end-user mapping practice with the Mi.Mu Gloves and what influences glove musicians' design decisions in the context of their performance practice, examining the question: How do end-users of a glove-based mid-air DMI design action-sound mapping strategies for musical performance? In the first study, the mapping practice of existing members of the Mi.Mu Glove community is examined. Glove musicians performed a mapping design task, which revealed marked differences between expert and novice glove musicians: novices designed mappings that evoked conceptual metaphors of spatial relationships between movement and music, while more experienced musicians focused on designing ergonomic mappings that minimised performer error. The second study examined the initial development period of glove mapping practice. A group of novice glove musicians was tracked in a longitudinal study. The findings supported the earlier observation that novices designed mappings using established conceptual metaphors, and revealed that transparency and the audience's ability to perceive their mappings were important to novice glove musicians. However, creative mapping was hindered by system reliability and the novices' poorly trained posture recognition. The third study examined the mapping practice of expert glove musicians, who took part in a series of interviews. Findings from this study supported earlier observations that expert glove musicians focus on error minimisation and ergonomic, simple controls, but also revealed that the experts embellished these simple controls with performative ancillary gestures to communicate aesthetic meaning. The expert musicians also suffered from system reliability issues and had developed a series of gestural techniques to mitigate accidental triggering. The fourth study examined the effects of system-related error in depth. A laboratory study investigated how system-related errors impacted a musician's ability to acquire skill with the gloves, finding that a 5% rate of system error had a significant effect on skill acquisition. Learning from these findings, a series of design heuristics is presented, applicable to DMI design, mid-air interaction design and end-user mapping design.
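    A minimal sketch of the end-user mapping layer this dissertation studies follows: recognised postures bound to sound actions by the musician, with an assumed per-event misrecognition rate (the fourth study manipulated a 5% system error) simulated to show how often a mapping misfires. Posture names and actions are invented; this is not the Mi.Mu software.

```python
# A minimal sketch of end-user action-sound mapping with simulated system
# error: the musician binds recognised postures to sound actions, and a 5%
# misrecognition rate shows how often performances are disrupted. Postures
# and actions are invented for illustration.
import random

mapping = {            # the musician's own posture -> action design
    "fist": "trigger_sample",
    "open_hand": "stop_sample",
    "point": "pitch_bend",
}

def perform(posture: str, error_rate: float = 0.05) -> str:
    """Resolve one recognised posture, with a chance of system misrecognition."""
    if random.random() < error_rate:
        # The system confuses the posture with a different one.
        posture = random.choice([p for p in mapping if p != posture])
    return mapping.get(posture, "no_action")

random.seed(1)
events = [perform("fist") for _ in range(1000)]
print("unintended actions:", sum(e != "trigger_sample" for e in events))
```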

    Emerging opportunities provided by technology to advance research in child health globally

    CITATION: van Heerden, A. et al. 2020. Emerging Opportunities Provided by Technology to Advance Research in Child Health Globally. Global Pediatric Health, 7:1-9. doi:10.1177/2333794X20917570. The original publication is available at https://journals.sagepub.com/home/gph
    Current approaches to longitudinal assessment of children's developmental and psychological well-being, as mandated in the United Nations Sustainable Development Goals, are expensive and time consuming. Substantive understanding of global progress toward these goals will require a suite of new robust, cost-effective research tools designed to assess key developmental processes in diverse settings. While first steps have been taken toward this end through efforts such as the National Institutes of Health's Toolbox, experience-near approaches including naturalistic observation have remained too costly and time consuming to scale to the population level. This perspective presents 4 emerging technologies with high potential for advancing the field of child health and development research, namely (1) affective computing, (2) ubiquitous computing, (3) eye tracking, and (4) machine learning. By drawing the attention of scientists, policy makers, investors/funders, and the media to the applications and potential risks of these emerging opportunities, we hope to inspire a fresh wave of innovation and new solutions to the global challenges faced by children and their families.

    Non-verbal communication with physiological sensors. The aesthetic domain of wearables and neural networks

    Historically, communication implies the transfer of information between bodies, yet this phenomenon is constantly adapting to new technological and cultural standards. In a digital context, it is commonplace to envision systems that revolve around verbal modalities. However, behavioural analysis grounded in psychology research calls attention to the emotional information disclosed by non-verbal social cues, in particular actions that are involuntary. This notion has circulated widely through various interdisciplinary computing research fields, from which multiple studies have arisen correlating non-verbal activity with socio-affective inferences. These are often derived from some form of motion capture and other wearable sensors measuring the 'invisible' bioelectrical changes that occur inside the body. This thesis proposes a motivation and methodology for using physiological sensory data as an expressive resource for technology-mediated interactions. It begins with a thorough discussion of state-of-the-art technologies and established design principles in this area, then applies them to a novel approach alongside a selection of practice works that complement it. We advocate for aesthetic experience, experimenting with abstract representations. Unlike prevailing Affective Computing systems, the intention is not to infer or classify emotion but rather to create new opportunities for rich gestural exchange, unconfined to the verbal domain. Given the preliminary proposition of non-representation, we justify a correspondence with modern Machine Learning and multimedia interaction strategies, applying an iterative, human-centred approach to improve personalisation without compromising the emotional potential of bodily gesture. Where related studies in the past have successfully provoked strong design concepts through innovative fabrications, these are typically limited to simple linear, one-to-one mappings and often neglect multi-user environments; we foresee a vast potential here. In our use cases, we adopt neural network architectures to generate highly granular biofeedback from low-dimensional input data. We present the following proofs of concept: Breathing Correspondence, a wearable biofeedback system inspired by Somaesthetic design principles; Latent Steps, a real-time autoencoder to represent bodily experiences from sensor data, designed for dance performance; and Anti-Social Distancing Ensemble, an installation for public-space interventions, analysing physical distance to generate a collective soundscape. Key findings are extracted from the individual reports to formulate an extensive technical and theoretical framework around this topic. The projects first aim to embrace some alternative perspectives already established within Affective Computing research. From there, these concepts evolve further, bridging theories from contemporary creative and technical practices with the advancement of biomedical technologies.
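    In the spirit of Latent Steps, a minimal sketch follows: an autoencoder that compresses low-dimensional physiological sensor frames into a small latent space whose coordinates could drive feedback in real time. Dimensions and architecture are illustrative assumptions, not the thesis's actual model.

```python
# A minimal autoencoder sketch in the spirit of Latent Steps: physiological
# sensor frames are compressed to a 2-D latent point that can be mapped to
# sound or visuals. Dimensions and layers are illustrative assumptions.
import torch
import torch.nn as nn

SENSOR_DIM, LATENT_DIM = 8, 2  # e.g. breathing, heart rate, EMG channels

class SensorAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(SENSOR_DIM, 16), nn.ReLU(),
                                     nn.Linear(16, LATENT_DIM))
        self.decoder = nn.Sequential(nn.Linear(LATENT_DIM, 16), nn.ReLU(),
                                     nn.Linear(16, SENSOR_DIM))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = SensorAutoencoder()
frame = torch.randn(1, SENSOR_DIM)  # one frame of sensor readings
latent = model.encoder(frame)       # 2-D point to drive feedback
print(latent.shape)                 # torch.Size([1, 2])
```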