
    Hand Gestures Recognition for Human-Machine Interfaces: A Low-Power Bio-Inspired Armband

    Hand gesture recognition has recently gained popularity as a Human-Machine Interface (HMI) in the biomedical field. Indeed, it can be performed with many different non-invasive techniques, e.g., surface ElectroMyoGraphy (sEMG) or PhotoPlethysmoGraphy (PPG). In the last few years, the interest shown by both academia and industry has led to a steady stream of commercial and custom wearable devices addressing different challenges in many application fields, from tele-rehabilitation to sign language recognition. In this work, we propose a novel 7-channel sEMG armband that can be employed as an HMI for both serious gaming control and rehabilitation support. In particular, we designed the prototype around the device's capability to compute the Average Threshold Crossing (ATC) parameter, which is evaluated by counting how many times the sEMG signal crosses a threshold within a fixed time window (i.e., 130 ms), directly on the wearable device. Exploiting the event-driven nature of the ATC, our armband accomplishes on-board prediction of common hand gestures while requiring less power than state-of-the-art devices. At the end of an acquisition campaign involving 26 people, we obtained an average classifier accuracy of 91.9% when recognizing, in real time, 8 active hand gestures plus the idle state. Furthermore, with 2.92 mA of current absorption during active functioning and 1.34 ms of prediction latency, this prototype confirmed our expectations and is an appealing solution for long-term (up to 60 h) medical and consumer applications.
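
    The ATC computation described above is simple enough to sketch. The following is a minimal offline illustration, assuming a raw sEMG array sampled at a known rate; the function name, threshold value, and synthetic test signal are illustrative and not taken from the armband's firmware.

```python
import numpy as np

def threshold_crossings_per_window(semg, fs, threshold, window_ms=130):
    """Count rising threshold crossings of an sEMG signal in fixed windows.

    semg      : 1-D array of raw sEMG samples
    fs        : sampling rate in Hz
    threshold : amplitude threshold (same units as semg)
    window_ms : window length in ms (130 ms, as in the abstract)
    Returns one crossing count per complete window; averaging these counts
    over time (and channels) yields the ATC feature.
    """
    window = int(fs * window_ms / 1000)
    above = (semg > threshold).astype(np.int8)
    # a crossing is a 0 -> 1 transition of the "above threshold" signal
    crossing_idx = np.flatnonzero(np.diff(above) == 1) + 1
    n_windows = len(semg) // window
    counts = np.zeros(n_windows, dtype=int)
    for i in crossing_idx:
        w = i // window
        if w < n_windows:
            counts[w] += 1
    return counts

# usage with a synthetic burst-like signal sampled at 1 kHz
fs = 1000
t = np.arange(0, 1.3, 1 / fs)
signal = np.abs(np.sin(2 * np.pi * 4 * t)) * np.random.rand(t.size)
print(threshold_crossings_per_window(signal, fs, threshold=0.5))
```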

    Methods and Technologies for the Analysis and Interactive Use of Body Movements in Instrumental Music Performance

    List of related publications: http://www.federicovisi.com/publications/
    A constantly growing corpus of interdisciplinary studies supports the idea that music is a complex multimodal medium that is experienced not only by means of sounds but also through body movement. From this perspective, musical instruments can be seen as technological objects coupled with a repertoire of performance gestures. This repertoire is part of an ecological knowledge shared by musicians and listeners alike. It is part of the engine that guides musical experience and has considerable expressive potential. This thesis explores technical and conceptual issues related to the analysis and creative use of music-related body movements in instrumental music performance. The complexity of this subject required an interdisciplinary approach, which includes the review of multiple theoretical accounts, quantitative and qualitative analysis of data collected in motion capture laboratories, the development and implementation of technologies for the interpretation and interactive use of motion data, and the creation of short musical pieces that actively employ the movement of the performers as an expressive musical feature. The theoretical framework is informed by embodied and enactive accounts of music cognition as well as by systematic studies of music-related movement and expressive music performance. The assumption that the movements of a musician are part of a shared knowledge is empirically explored through an experiment aimed at analysing the motion capture data of a violinist performing a selection of short musical excerpts. A group of subjects with no prior experience playing the violin is then asked to mime a performance following the audio excerpts recorded by the violinist. Motion data is recorded, analysed, and compared with the expert’s data. This is done both quantitatively through data analysis and qualitatively by relating the motion data to other high-level features and structures of the musical excerpts. Solutions to issues regarding capturing and storing movement data and its use in real-time scenarios are proposed. For the interactive use of motion-sensing technologies in music performance, various wearable sensors have been employed, along with different approaches for mapping control data to sound synthesis and signal processing parameters. In particular, novel approaches for the extraction of meaningful features from raw sensor data and the use of machine learning techniques for mapping movement to live electronics are described. To complete the framework, an essential element of this research project is the composition and performance of études that explore the creative use of body movement in instrumental music from a Practice-as-Research perspective. This serves as a test bed for the proposed concepts and techniques. Mapping concepts and technologies are challenged in a scenario constrained by the use of musical instruments, and different mapping approaches are implemented and compared. In addition, techniques for notating movement in the score, and the impact of interactive motion sensor systems on instrumental music practice from the performer’s perspective, are discussed. Finally, the chapter concluding the part of the thesis dedicated to practical implementations describes a novel method for mapping movement data to sound synthesis. This technique is based on the analysis of multimodal motion data collected from multiple subjects, and its design draws from the theoretical, analytical, and practical works described throughout the dissertation. Overall, the parts and the diverse approaches that constitute this thesis work in synergy, contributing to the ongoing discourses on the study of musical gestures and the design of interactive music systems from multiple angles.
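
    As a rough illustration of the kind of movement-to-sound mapping mentioned above, a regression model can be trained to translate motion features into synthesis parameters. The sketch below uses synthetic data and assumed feature and parameter names; it is not the mapping method developed in the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# hypothetical training set: rows are frames of motion features
# (e.g. wrist acceleration magnitude, bow tilt, quantity of motion),
# targets are synthesis parameters (e.g. filter cutoff, reverb mix) in [0, 1]
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
y_train = rng.random((200, 2))

mapper = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mapper.fit(X_train, y_train)

# at performance time, each incoming motion frame is mapped to control values
frame = np.array([[0.4, 0.7, 0.2]])
cutoff, reverb = np.clip(mapper.predict(frame)[0], 0.0, 1.0)
print(f"filter cutoff={cutoff:.2f}, reverb mix={reverb:.2f}")
```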

    INTERACTIVE SONIFICATION STRATEGIES FOR THE MOTION AND EMOTION OF DANCE PERFORMANCES

    The Immersive Interactive SOnification Platform, or iISoP for short, is a research platform for the creation of novel multimedia art, as well as for exploratory research in the fields of sonification, affective computing, and gesture-based user interfaces. The goal of the iISoP’s dancer sonification system is to “sonify the motion and emotion” of a dance performance via musical auditory display. An additional goal of this dissertation is to develop and evaluate musical strategies for adding a layer of emotional mappings to data sonification. The series of dancer sonification design exercises led to the development of a novel musical sonification framework. The overall design process is divided into three main iterative phases: requirement gathering, prototype generation, and system evaluation. In the first phase, dancers and musicians provided help in a participatory design fashion as domain experts in the field of non-verbal affective communication. Knowledge extraction procedures took the form of semi-structured interviews, stimuli feature evaluation, workshops, and think-aloud protocols. In phase two, the expert dancers and musicians helped create testable stimuli for prototype evaluation. In phase three, system evaluation, experts (dancers, musicians, etc.) and novice participants were recruited to provide subjective feedback from the perspectives of both performer and audience. Based on the results of the iterative design process, a novel sonification framework that translates motion and emotion data into descriptive music is proposed and described.
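
    To give a concrete flavour of what translating motion and emotion data into musical parameters can look like, here is a minimal, assumed mapping; the descriptors, ranges, and rules are illustrative only and do not reproduce the iISoP framework.

```python
# a minimal, assumed mapping from motion/emotion descriptors to musical
# parameters; parameter names and ranges are illustrative, not the iISoP design
def sonify_frame(speed, arousal, valence):
    """speed in m/s, arousal and valence in [-1, 1]."""
    tempo_bpm = 60 + 80 * min(speed / 3.0, 1.0)      # faster motion -> faster tempo
    register = 48 + int(24 * (arousal + 1) / 2)      # higher arousal -> higher MIDI register
    mode = "major" if valence >= 0 else "minor"      # positive valence -> major mode
    return {"tempo_bpm": round(tempo_bpm), "midi_root": register, "mode": mode}

print(sonify_frame(speed=1.5, arousal=0.6, valence=-0.2))
```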

    Learning Algorithm Design for Human-Robot Skill Transfer

    In this research, we develop an intelligent learning scheme for performing human-robot skill transfer. Techniques adopted in the scheme include the Dynamic Movement Primitive (DMP) method with Dynamic Time Warping (DTW), the Gaussian Mixture Model (GMM) with Gaussian Mixture Regression (GMR), and Radial Basis Function Neural Networks (RBFNNs). A series of experiments is conducted on a Baxter robot, a NAO robot, and a KUKA iiwa robot to verify the effectiveness of the proposed design.
    During the design of the intelligent learning scheme, an online tracking system is developed to control the arm and head movement of the NAO robot using a Kinect sensor. The NAO robot is a humanoid robot with 5 degrees of freedom (DOF) for each arm. The joint motions of the operator’s head and arm are captured by a Kinect V2 sensor, and this information is then transferred into the workspace via the forward and inverse kinematics. In addition, to improve the tracking performance, a Kalman filter is employed to fuse motion signals from the operator sensed by the Kinect V2 sensor and a pair of MYO armbands, so as to teleoperate the Baxter robot. In this regard, a new strategy is developed using the vector approach to accomplish a specific motion capture task. For instance, the arm motion of the operator is captured by a Kinect sensor and programmed through processing software. Two MYO armbands with embedded inertial measurement units are worn by the operator to aid the robots in detecting and replicating the operator’s arm movements. For this purpose, the armbands help to recognize and calculate the precise velocity of motion of the operator’s arm. Additionally, a neural network based adaptive controller is designed and implemented on the Baxter robot to validate its teleoperation.
    Subsequently, an enhanced teaching interface has been developed for the robot using DMP and GMR. Motion signals are collected from a human demonstrator via the Kinect V2 sensor, and the data is sent to a remote PC for teleoperating the Baxter robot. At this stage, the DMP is utilized to model and generalize the movements. In order to learn from multiple demonstrations, DTW is used for the preprocessing of the data recorded on the robot platform, and GMM is employed for the evaluation of DMP to generate multiple patterns after the completion of the teaching process. Next, we apply the GMR algorithm to generate a synthesized trajectory that minimizes position errors in the three-dimensional (3D) space. This approach has been tested by performing tasks on a KUKA iiwa and a Baxter robot, respectively.
    Finally, an optimized DMP is added to the teaching interface. A character recombination technology based on DMP segmentation that uses verbal commands has also been developed and incorporated into the Baxter robot platform. To imitate the recorded motion signals produced by the demonstrator, the operator trains the Baxter robot by physically guiding it to complete the given task. This is repeated five times, and the generated training data set is utilized via the playback system. Subsequently, DTW is employed to preprocess the experimental data. For modelling and overall movement control, DMP is chosen. The GMM is used to generate multiple patterns after implementing the teaching process. Next, we employ the GMR algorithm to reduce position errors in the 3D space after a synthesized trajectory has been generated. The Baxter robot, remotely controlled via the User Datagram Protocol (UDP) from a PC, records and reproduces every trajectory. Additionally, Dragon NaturallySpeaking software is adopted to transcribe the voice data. This proposed approach has been verified by enabling the Baxter robot to perform a writing task in which it is taught to write only one character.
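
    The GMM/GMR step described above can be sketched compactly: a joint Gaussian mixture is fitted over time-aligned demonstrations, and Gaussian Mixture Regression then produces a synthesized trajectory. The sketch below uses synthetic one-dimensional demonstrations and scikit-learn, not the robot platforms or software used in the thesis.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# several noisy demonstrations of a 1-D trajectory, already time-aligned
# (in the thesis DTW handles the alignment; here the demos are generated aligned)
t = np.linspace(0, 1, 100)
demos = [np.sin(np.pi * t) + 0.05 * np.random.randn(t.size) for _ in range(5)]

# stack (time, position) samples from all demonstrations and fit a joint GMM
data = np.column_stack([np.tile(t, len(demos)), np.concatenate(demos)])
gmm = GaussianMixture(n_components=6, covariance_type="full", random_state=0).fit(data)

def gmr(query_t):
    """Gaussian Mixture Regression: E[x | t] under the fitted joint GMM."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    # responsibility of each component for the query time
    h = np.array([w * norm.pdf(query_t, m[0], np.sqrt(c[0, 0]))
                  for w, m, c in zip(weights, means, covs)])
    h /= h.sum()
    # conditional mean of x given t for each component, blended by h
    cond = [m[1] + c[1, 0] / c[0, 0] * (query_t - m[0]) for m, c in zip(means, covs)]
    return float(np.dot(h, cond))

synthesized = np.array([gmr(ti) for ti in t])   # smoothed "average" trajectory
print(synthesized[:5])
```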

    Proficiency-aware systems

    In an increasingly digital world, technological developments such as data-driven algorithms and context-aware applications create opportunities for novel human-computer interaction (HCI). We argue that these systems have the latent potential to stimulate users and encourage personal growth. However, users increasingly rely on the intelligence of interactive systems. Thus, it remains a challenge to design for proficiency awareness, essentially demanding increased user attention whilst preserving user engagement. Designing and implementing systems that allow users to become aware of their own proficiency and encourage them to recognize learning benefits is the primary goal of this research. In this thesis, we introduce the concept of proficiency-aware systems as one solution. In our definition, proficiency-aware systems use estimates of the user's proficiency to tailor the interaction in a domain and facilitate a reflective understanding of this proficiency. We envision that proficiency-aware systems leverage collected data for learning benefit. Here, we see self-reflection as key for users to become aware of the efforts necessary to advance their proficiency. A key challenge for proficiency-aware systems is the fact that users often have a different self-perception of their proficiency. The benefits of personal growth and advancing one's repertoire might not necessarily be apparent to users, alienating them and possibly leading them to abandon the system. To tackle this challenge, this work does not rely on learning strategies but rather focuses on the capabilities of interactive systems to provide users with the necessary means to reflect on their proficiency, such as showing calculated text difficulty to a newspaper editor or visualizing muscle activity to a passionate sportsperson. We first elaborate on how proficiency can be detected and quantified in the context of interactive systems using physiological sensing technologies. Through developing interaction scenarios, we demonstrate the feasibility of gaze- and electromyography-based proficiency-aware systems by utilizing machine learning algorithms that can estimate users' proficiency levels for stationary vision-dominant tasks (reading, information intake) and dynamic manual tasks (playing instruments, fitness exercises). Secondly, we show how to facilitate proficiency awareness for users, including design challenges on when and how to communicate proficiency. We complement this second part by highlighting the necessity of toolkits for sensing modalities to enable the implementation of proficiency-aware systems for a wide audience. In this thesis, we contribute a definition of proficiency-aware systems, which we illustrate by designing and implementing interactive systems. We derive technical requirements for real-time, objective proficiency assessment and identify design qualities for communicating proficiency through user reflection. We summarize our findings in a set of design and engineering guidelines for proficiency awareness in interactive systems, highlighting that proficiency feedback makes performance interpretable for the user.
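
    The proficiency estimation described in the first part of the thesis reduces to supervised learning over physiological features. The sketch below is a generic illustration with synthetic data; the features, labels, and classifier are assumptions, not the models used in the work.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# hypothetical feature matrix: per-trial descriptors such as mean EMG envelope,
# movement smoothness, and fixation duration; labels are assumed proficiency
# levels (0 = novice, 1 = intermediate, 2 = expert)
rng = np.random.default_rng(42)
X = rng.random((90, 4))
y = rng.integers(0, 3, 90)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```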

    Accessible Integration of Physiological Adaptation in Human-Robot Interaction

    Technological advancements in creating and commercializing novel unobtrusive wearable physiological sensors have generated new opportunities to develop adaptive human-robot interaction (HRI). Detecting complex human states such as engagement and stress when interacting with social agents could bring numerous advantages to creating meaningful interactive experiences. Bodily signals have classically been used for post-interaction analysis in HRI. Despite this, real-time measurements of autonomic responses have been used in other research domains to develop physiologically adaptive systems with great success: increasing user experience and task performance, and reducing cognitive workload. This thesis presents the HRI Physio Lib, a conceptual framework and open-source software library to facilitate the development of physiologically adaptive HRI scenarios. Both the framework and the architecture of the library are described in depth, along with descriptions of additional software tools that were developed to make the inclusion of physiological signals easier for robotics frameworks. The framework is structured around four main components for designing physiologically adaptive experimental scenarios: signal acquisition; processing and analysis; social robot and communication; and scenario and adaptation. Open-source software tools have been developed to assist in the individual creation of each described component. To showcase our framework and test the software library, we developed, as a proof of concept, a simple scenario revolving around a physiologically aware exercise coach that modulates the speed and intensity of the activity to promote effective cardiorespiratory exercise. We employed the socially assistive QT robot for our exercise scenario, as it provides a comprehensive ROS interface, making prototyping of behavioral responses fast and simple. Our exercise routine was designed following guidelines by the American College of Sports Medicine. We describe our physiologically adaptive algorithm and propose a second, alternative one with stochastic elements. Finally, a discussion of other HRI domains where the addition of a physiologically adaptive mechanism could lead to novel advances in interaction quality is provided as future extensions of this work. From the literature, we identified improving engagement, providing deeper social connections, health care scenarios, and applications in self-driving vehicles as promising avenues for future research where a physiologically adaptive social robot could improve user experience.
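
    As a rough illustration of the adaptation loop in the exercise-coach scenario, the sketch below keeps a (simulated) heart rate inside a target zone by nudging the coach's pace. Sensor access and robot commands are stubbed out; the zone, step sizes, and function names are assumptions rather than the HRI Physio Lib API.

```python
import random
import time

TARGET_ZONE = (110, 140)   # assumed target heart-rate zone in bpm

def read_heart_rate():
    """Stub for a wearable sensor stream; replace with the real interface."""
    return random.randint(90, 160)

def set_exercise_intensity(pace):
    """Stub for a robot command, e.g. the coach demonstrating at a new pace."""
    print(f"coach pace set to {pace:.2f}")

pace = 1.0
for _ in range(5):
    hr = read_heart_rate()
    if hr < TARGET_ZONE[0]:
        pace = min(pace + 0.1, 2.0)      # under-exerted: speed up the routine
    elif hr > TARGET_ZONE[1]:
        pace = max(pace - 0.1, 0.5)      # over-exerted: slow down
    set_exercise_intensity(pace)
    time.sleep(0.1)
```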

    An investigation of mid-air gesture interaction for older adults

    Older adults (60+) face a natural and gradual decline in cognitive, sensory, and motor functions, which is often the reason for the difficulties older users encounter when interacting with computers. For that reason, the investigation and design of age-inclusive input methods for computer interaction is much needed and relevant for an ageing population. Advances in motion-sensing technologies and mid-air gesture interaction have reinvented how individuals can interact with computer interfaces, and this input modality is often deemed more “natural” and “intuitive” than purely traditional input devices such as the mouse. Although explored in gaming and entertainment, the suitability of mid-air gesture interaction for older users in particular remains little known. The purpose of this research is to investigate the potential of mid-air gesture interaction to facilitate computer use for older users, and to address the challenges that older adults may face when interacting with gestures in mid-air. This doctoral research is presented as a collection of papers that, together, develop the topic of ageing and computer interaction through mid-air gestures. The starting point for this research was to establish how older users differ from younger users and to focus on the challenges faced by older adults when using mid-air gesture interaction. Once these challenges were identified, this work explored a series of usability challenges and opportunities to further develop age-inclusive interfaces based on mid-air gesture interaction. Through a series of empirical studies, this research provides recommendations for designing mid-air gesture interaction that better takes into consideration the needs and skills of the older population, and aims to contribute to the advancement of age-friendly interfaces.

    Enhancing the Quality and Motivation of Physical Exercise Using Real-Time Sonification

    This research project investigated the use of real-time sonification as a way to improve the quality of, and motivation for, biceps curl exercise among healthy young participants. A sonification system was developed featuring an electromyography (EMG) sensor and a Microsoft Kinect camera. During exercise, muscular and kinematic data were collected and sent to custom-designed sonification software, developed in Max, to generate real-time auditory feedback. The software provides four types of output sound in consideration of personal preference and long-term use. Three experiments were carried out. The pilot study examined the sonification system and gathered the users’ comments about their experience of each type of sound in relation to its functionality and aesthetics. A 3-session between-subjects test and an 8-session within-subjects comparative test were conducted to compare exercise quality and motivation between two conditions: with and without the real-time sonification. Overall, several conclusions are drawn from the experimental results. The sonification significantly improved participants’ pacing of the biceps curl. No significant effect was found on vertical movement range. Participants expended more effort when training with the sonification present. Analysis of surveys indicated higher motivation and willingness when exercising with the sonification. The results reflect a wider potential for applications including general fitness, physiotherapy, and elite sports training.
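
    A minimal version of the muscular half of this mapping can be sketched as follows: the raw EMG is rectified and smoothed into an envelope, and the normalized effort level is mapped to pitch and amplitude. The thesis implements the sonification in Max; the Python below, with its parameter ranges, is only an assumed illustration.

```python
import numpy as np

def emg_envelope(emg, fs, win_ms=100):
    """Rectify and smooth raw EMG with a moving-average window."""
    win = max(int(fs * win_ms / 1000), 1)
    kernel = np.ones(win) / win
    return np.convolve(np.abs(emg), kernel, mode="same")

def envelope_to_sound_params(env_value, env_max):
    """Map normalized muscular effort to pitch (Hz) and amplitude (0..1)."""
    level = np.clip(env_value / env_max, 0.0, 1.0)
    pitch_hz = 220.0 * 2 ** (level * 2)     # effort spans two octaves above A3
    amplitude = 0.2 + 0.8 * level
    return pitch_hz, amplitude

# usage with a synthetic burst of EMG sampled at 1 kHz
fs = 1000
emg = np.random.randn(fs) * np.hanning(fs)
env = emg_envelope(emg, fs)
pitch, amp = envelope_to_sound_params(env[-1], env.max())
print(f"pitch {pitch:.1f} Hz, amplitude {amp:.2f}")
```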

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input will produce results different to those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. However, one of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or a mid-air interface, affects it. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation to characterize the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.
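
    The definition of a mode given above is easy to make concrete: the same drag input produces different results depending on which mode is active. The sketch below is a generic illustration with made-up modes and actions, not code from the thesis.

```python
from enum import Enum, auto

class Mode(Enum):
    DRAW = auto()
    PAN = auto()
    SELECT = auto()

# the same drag gesture produces different results depending on the active
# mode, which is exactly the notion of a mode used above; actions are stubs
def handle_drag(mode, dx, dy):
    if mode is Mode.DRAW:
        return f"draw line by ({dx}, {dy})"
    if mode is Mode.PAN:
        return f"pan canvas by ({dx}, {dy})"
    if mode is Mode.SELECT:
        return f"extend selection rectangle by ({dx}, {dy})"

mode = Mode.DRAW
print(handle_drag(mode, 5, 3))
mode = Mode.PAN                     # a mode switch re-maps identical input
print(handle_drag(mode, 5, 3))
```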