Towards a Characterisation of Emotional Intent During Scripted Scenes Using In-ear Movement Sensors
Theatre provides a unique environment in which to obtain detailed data on social interactions in a controlled and repeatable manner. This work introduces a method for capturing and characterising the underlying emotional intent of performers in a scripted scene using in-ear accelerometers. Each scene is acted with different underlying emotional intentions using the theatrical technique of Actioning. The goal of the work is to uncover characteristics in the joint movement patterns that reveal information on the positive or negative valence of these intentions. Preliminary findings over 3x12 (Covid-19 restricted) non-actor trials suggest people are more energetic and more in-sync when using positive versus negative intentions.
Towards Respiration Rate Monitoring Using an In-Ear Headphone Inertial Measurement Unit
State-of-the-art respiration tracking devices require specialized equipment, making them impractical for everyday at-home respiration sensing. In this paper, we present the first system for sensing respiratory rates using in-ear headphone inertial measurement units (IMUs). The approach is based on technology already available in commodity devices: the eSense headphones. Our processing pipeline combines several existing approaches to clean noisy data and calculate respiratory rates over 20-second windows. In a study with twelve participants, we compare accelerometer- and gyroscope-based sensing, employing pressure-based measurement with nasal cannulas as ground truth. Our results indicate a mean absolute error of 2.62 CPM (accelerometer) and 2.55 CPM (gyroscope). This overall accuracy is comparable to previous approaches using accelerometer-based sensing, but we observe a higher relative error for the gyroscope. In contrast to related work using other sensor positions, we cannot report significant differences between the two modalities or the three postures (standing, sitting, and lying supine). However, performance varies drastically between participants.
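The core of such a pipeline — estimating a rate on fixed windows of a cleaned IMU signal — can be sketched in a few lines. The sketch below is a simplified stand-in for the paper's method, not its actual implementation: it counts prominent peaks in one filtered axis of a 20-second window and converts the count to cycles per minute (CPM). The function name and the `min_gap_s` debouncing parameter are illustrative assumptions.

```python
def respiration_rate_cpm(signal, fs, min_gap_s=1.5):
    """Estimate a respiratory rate (cycles per minute, CPM) for one
    IMU-axis window by counting prominent peaks.

    A peak is a local maximum above the window mean that lies at least
    `min_gap_s` seconds after the previously accepted peak, which rejects
    small noise ripples between breaths (an assumed, simplified cleanup
    step standing in for the paper's full filtering pipeline).
    """
    mean = sum(signal) / len(signal)
    min_gap = int(min_gap_s * fs)        # minimum peak spacing in samples
    peaks, last = [], -min_gap
    for i in range(1, len(signal) - 1):
        if (signal[i] > mean
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]
                and i - last >= min_gap):
            peaks.append(i)
            last = i
    window_s = len(signal) / fs
    return len(peaks) * 60.0 / window_s  # peaks per window -> per minute
```

For a clean 0.25 Hz breathing-like sinusoid (15 breaths per minute) sampled at 50 Hz over a 20-second window, this returns 15.0 CPM; real IMU data would first need the noise-removal steps the paper describes.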
Design Space and Usability of Earable Prototyping
Earable computing is gaining attention within research and becoming ubiquitous in society. However, there is an emerging need for prototyping devices as critical drivers of innovation. In our work, we reviewed the features of existing earable platforms. Based on 24 publications, we characterized the design space of earable prototyping. We used the open eSense platform (6-axis IMU, auditory I/O) to evaluate its usability for problem-based learning with non-experts. We collected data from 79 undergraduate students who developed 39 projects. Our questionnaire-based results suggest that the platform creates interest in the subject matter and supports self-directed learning. The projects align with the research space, indicating ease of use, but lack contributions to more challenging topics. Additionally, many projects included games, which are not present in current research. The average SUS score of the platform was 67.0. The majority of problems were technical issues (e.g., connecting, playing music).
Motion Coupling of Earable Devices in Camera View
Earables, earphones augmented with inertial sensors and real-time data access, provide the opportunity for private audio channels in public settings. One of the main challenges in achieving this goal is correctly associating which device belongs to which user without prior information. In this paper, we explore how the motion of an earable, as measured by the on-board accelerometer, can be correlated against faces detected by a webcam to accurately match which user is wearing the device. We conduct a data collection, explore which types of user movement can be accurately detected using this approach, and investigate how varying the speed of the movement affects detection rates. Our results show that the approach achieves higher detection rates for faster movements, and that it can differentiate the same movement across different participants with a detection rate of 86%, increasing to 92% when differentiating a movement against others.
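The device-to-face association step can be illustrated with a simple correlation matcher: score the earable's motion trace against each tracked face's motion trace and pick the best. This is a minimal sketch, assuming both signals have already been converted to a comparable motion representation and resampled to a common rate; the paper's actual matching procedure may differ, and the function names here are illustrative.

```python
def best_match(earable_signal, face_tracks):
    """Return the index of the face track whose motion trace has the
    highest Pearson correlation with the earable's accelerometer trace.

    `earable_signal` and each entry of `face_tracks` are equal-length
    lists of per-frame motion values (an assumed common representation).
    """
    def pearson(a, b):
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = sum((x - ma) ** 2 for x in a) ** 0.5
        sb = sum((y - mb) ** 2 for y in b) ** 0.5
        return cov / (sa * sb) if sa and sb else 0.0  # guard flat signals

    scores = [pearson(earable_signal, track) for track in face_tracks]
    return max(range(len(scores)), key=scores.__getitem__)
```

A track that mirrors the earable's motion (correlation near +1) wins over one moving in anti-phase (correlation near -1), which is the intuition behind matching faster, more distinctive movements more reliably.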
OpenEarable: Open Hardware Earable Sensing Platform
Earables are ear-worn devices that offer functionality beyond basic audio input and output. In this paper, we present the ongoing development of a new, open-source, Arduino-based earable platform called OpenEarable. It is based on standard components, is easy to manufacture, and costs roughly $40 per device at batch size ten. We present the first version of the device, which is equipped with a series of sensors and actuators: a 3-axis accelerometer and gyroscope, an ear canal pressure and temperature sensor, an inward-facing ultrasonic microphone as well as a speaker, a push button, and a controllable LED. We demonstrate the versatility of the prototyping platform through three different example application scenarios. In sum, OpenEarable offers a general-purpose, open sensing platform for earable research and development.
As You Are, So Shall You Move Your Head: A System-Level Analysis between Head Movements and Corresponding Traits and Emotions
Identifying physical traits and emotions from system-sensed physical activity is a challenging problem in the realm of human-computer interaction. Our work contributes in this context by investigating an underlying connection between head movements and corresponding traits and emotions. To do so, we utilize eSense, a head movement measuring device that reports the acceleration and rotation of the head. We first conduct a thorough study of head movement data collected with eSense from 46 persons while inducing five different emotional states in isolation. Our analysis reveals several new head-movement-based findings, which in turn lead us to a novel unified solution for identifying different human traits and emotions by applying machine learning techniques to head movement data. Our analysis confirms that the proposed solution achieves high accuracy on the collected data. Accordingly, we develop an integrated, unified solution for real-time emotion and trait identification from head movement data, leveraging the outcomes of our analysis.
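Pipelines like this typically turn each head-movement window into a small feature vector before classification. The sketch below shows one plausible per-axis feature extractor (mean, standard deviation, signal energy, zero-crossing rate); the specific features the paper's models use are not stated here, so treat this as an assumed, generic example.

```python
def window_features(axis_samples):
    """Statistical features for one IMU axis over a window -- the kind of
    descriptors commonly fed to a classifier for trait/emotion models.
    (Feature choice is an assumption, not taken from the paper.)
    """
    n = len(axis_samples)
    mean = sum(axis_samples) / n
    var = sum((x - mean) ** 2 for x in axis_samples) / n
    energy = sum(x * x for x in axis_samples) / n
    # Count sign changes of the mean-removed signal between neighbours.
    zero_crossings = sum(
        1 for a, b in zip(axis_samples, axis_samples[1:])
        if (a - mean) * (b - mean) < 0
    )
    return {
        "mean": mean,
        "std": var ** 0.5,
        "energy": energy,
        "zcr": zero_crossings / (n - 1),
    }
```

Concatenating these features across the accelerometer and gyroscope axes yields one vector per window, which any off-the-shelf classifier can then map to an emotion or trait label.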
Earables for Detection of Bruxism: A Feasibility Study
Bruxism is a disorder characterised by teeth grinding and clenching, and many bruxism sufferers are not aware of the disorder until their dental health professional notices permanent tooth wear. Stress and anxiety are often listed among the contributing factors to bruxism exacerbation, which may explain why the COVID-19 pandemic gave rise to a bruxism epidemic. It is essential to develop tools that allow early diagnosis of bruxism in an unobtrusive manner. This work explores the feasibility of detecting bruxism-related events using earables in a mimicked in-the-wild setting. Using an inertial measurement unit for data collection, we apply traditional machine learning to teeth grinding and clenching detection. We observe superior performance from models based on gyroscope data, achieving 88% and 66% accuracy on grinding and clenching activities, respectively, in a controlled environment, and 76% and 73% on grinding and clenching, respectively, in an in-the-wild environment.
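As a rough illustration of event detection on gyroscope data, the sketch below flags windows whose RMS energy exceeds a threshold. This is a deliberately minimal stand-in for the paper's learned grinding/clenching classifiers: the function name, the window length, and the default threshold (twice the overall RMS) are all assumptions for the example.

```python
def detect_events(gyro_mag, fs, window_s=1.0, threshold=None):
    """Return indices of fixed-length windows whose RMS exceeds a
    threshold -- a toy energy-based stand-in for the machine-learning
    grinding/clenching detector described in the paper.

    `gyro_mag` is the gyroscope magnitude signal sampled at `fs` Hz.
    """
    w = int(window_s * fs)

    def rms(segment):
        return (sum(x * x for x in segment) / len(segment)) ** 0.5

    if threshold is None:
        # Assumed heuristic: flag windows well above the overall level.
        threshold = 2 * rms(gyro_mag)
    return [i // w for i in range(0, len(gyro_mag) - w + 1, w)
            if rms(gyro_mag[i:i + w]) > threshold]
```

On a signal that is quiet except for one high-energy burst, only the burst's window index is returned; a real detector would replace the threshold with a trained model over richer features.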
Detecting Freezing of Gait with earables trained from VR motion capture data
Freezing of Gait (FoG) is a common disabling motor symptom in Parkinson’s Disease (PD). Auditory cueing provided when FoG is detected can help mitigate the condition, for which earables are potentially well suited as they are capable of both motion sensing and audio feedback. However, there are no studies so far on FoG detection at the ear. Immersive Virtual Reality (VR) combined with video-based full-body motion capture has been increasingly used to run FoG studies in the medical community. While there are motion capture datasets collected in such environments, there are no datasets collected from an IMU placed at the ear. In this paper, we show how to transfer such motion capture datasets to the IMU domain and evaluate the capability of FoG detection from the ear position in an immersive VR environment. Using a dataset of six PD patients, we compare machine learning-based FoG detection applied to the motion capture data and to the virtual IMU data. We achieved an average sensitivity of 80.3% and an average specificity of 87.6% for FoG detection using the virtual earable IMU, which indicates the potential of FoG detection at the ear. This study is a step toward user studies with earables in VR setups, prior to conducting research in over-ground walking and everyday life.
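The basic ingredient of such a motion-capture-to-IMU transfer is differentiating the tracked ear-marker position twice to obtain a virtual acceleration. The sketch below shows that step for one axis via a central second-order finite difference; it deliberately ignores gravity and sensor orientation, which a faithful virtual-IMU transfer (like the paper's) must also account for.

```python
def virtual_accel(positions, fs):
    """Approximate IMU-style acceleration from motion-capture position
    samples of an ear marker (one axis, metres) sampled at `fs` Hz.

    Uses the central second-order finite difference:
        a[i] ~= (p[i-1] - 2*p[i] + p[i+1]) * fs**2
    Gravity and orientation effects are intentionally omitted here.
    """
    dt2 = (1.0 / fs) ** 2
    return [(positions[i - 1] - 2 * positions[i] + positions[i + 1]) / dt2
            for i in range(1, len(positions) - 1)]
```

For a marker following constant-acceleration motion, the finite difference recovers that acceleration exactly; on real motion-capture tracks, low-pass filtering before differentiation is usually needed because differentiation amplifies tracking noise.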
Hierarchical Feature Recovery for Robust Human Activity Recognition in Body Sensor Networks
With the advances in Body Sensor Networks (BSNs) and textile-integrated sensing, more sensor data becomes available from which human activities can be recognised. However, in practice some sensors may become unavailable unexpectedly. Previous work proposed to complement the features of a missing sensor with regression-based methods, but considered at most one missing sensor and thus lacked a mechanism for selecting relevant sensors when multiple sensors are missing. The number of unique combinations of missing sensors increases exponentially with the number of sensors that may be missing. To handle this, we propose a Hierarchical Feature Recovery (HFR) approach. We first assess the dependencies between sensors by comparing the feature mapping accuracy between each pair of sensors, and then evaluate the HFR approach on a dataset of activities of daily living with 17 gestures recorded by 14 motion sensors. Our HFR method can alleviate the classification performance drop by up to 8.3 percentage points compared to a baseline method.
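The regression-based recovery idea can be illustrated in its simplest form: fit a least-squares mapping from an available sensor's feature to the missing sensor's feature using past data where both were present, then use the fitted mapping when the sensor drops out. The paper's HFR method generalises this to selecting among many sensors hierarchically; the one-dimensional sketch and names below are illustrative only.

```python
def fit_recovery(x_obs, y_obs):
    """Fit y ~= w*x + b by ordinary least squares and return a predictor.

    A minimal one-feature version of regression-based feature recovery:
    `x_obs` are values of a feature from a sensor that stays available,
    `y_obs` the corresponding values of the sensor that may go missing.
    """
    n = len(x_obs)
    mx, my = sum(x_obs) / n, sum(y_obs) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(x_obs, y_obs))
    var = sum((x - mx) ** 2 for x in x_obs)
    w = cov / var                 # slope from normal equations
    b = my - w * mx               # intercept through the means
    return lambda x: w * x + b    # predictor for the missing feature
```

At recognition time, the recovered value is substituted into the feature vector in place of the missing sensor's reading, so the downstream activity classifier can keep operating unchanged; choosing *which* available sensor to regress from is exactly where HFR's dependency assessment comes in.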