
    Detecting Eating Episodes with an Ear-mounted Sensor

    In this paper, we propose Auracle, a wearable earpiece that can automatically recognize eating behavior. More specifically, in free-living conditions, we can recognize when and for how long a person is eating. Using an off-the-shelf contact microphone placed behind the ear, Auracle captures the sound of a person chewing as it passes through the bone and tissue of the head. This audio data is then processed by a custom analog/digital circuit board. To ensure reliable (yet comfortable) contact between microphone and skin, all hardware components are incorporated into a 3D-printed behind-the-head framework. We collected field data with 14 participants for 32 hours in free-living conditions and additional eating data with 10 participants for 2 hours in a laboratory setting. We achieved accuracy exceeding 92.8% and an F1 score exceeding 77.5% for eating detection. Moreover, Auracle successfully detected 20-24 eating episodes (depending on the metric) out of 26 in free-living conditions. We demonstrate that our custom device can sense, process, and classify audio data in real time. Additionally, we estimate that Auracle can last 28.1 hours on a 110 mAh battery while communicating its observations of eating behavior to a smartphone over Bluetooth.
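    For intuition, here is a minimal sketch (in Python) of the kind of chewing detector such a device could run: band-pass the body-conducted audio, compute short-time frame energy, and flag frames well above the noise floor. The band edges, frame length, and threshold margin are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def detect_chewing_frames(audio: np.ndarray, fs: int = 8000) -> np.ndarray:
    """Return one boolean per 1-second frame indicating likely chewing."""
    # Bone-conducted chewing energy sits at low frequencies;
    # keep roughly 20-500 Hz (an assumed band, not Auracle's).
    sos = butter(4, [20, 500], btype="bandpass", fs=fs, output="sos")
    filtered = sosfilt(sos, audio)

    frame = fs  # 1-second frames
    n_frames = len(filtered) // frame
    energy = np.array([np.mean(filtered[i * frame:(i + 1) * frame] ** 2)
                       for i in range(n_frames)])

    # Flag frames whose energy clearly exceeds the recording's noise floor.
    threshold = 5.0 * np.median(energy)  # assumed margin
    return energy > threshold
```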

    Sensing with Earables: A Systematic Literature Review and Taxonomy of Phenomena

    Earables have emerged as a unique platform for ubiquitous computing by augmenting ear-worn devices with state-of-the-art sensing. This new platform has spurred a wealth of new research exploring what can be detected with a small, wearable form factor. As a sensing platform, the ears are less susceptible to motion artifacts and are located in close proximity to a number of important anatomical structures, including the brain, blood vessels, and facial muscles, which reveal a wealth of information. They can be easily reached by the hands, and the ear canal itself is affected by mouth, face, and head movements. We conducted a systematic literature review of 271 earable publications from the ACM and IEEE libraries. These were synthesized into an open-ended taxonomy of 47 different phenomena that can be sensed in, on, or around the ear. Through analysis, we identify 13 fundamental phenomena from which all other phenomena can be derived, and discuss the different sensors and sensing principles used to detect them. We comprehensively review the phenomena in four main areas: (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. This breadth highlights the potential that earables have to offer as a ubiquitous, general-purpose platform.


    Detection of Health-Related Behaviours Using Head-Mounted Devices

    The detection of health-related behaviors is the basis of many mobile-sensing applications for healthcare and can trigger other inquiries or interventions. Wearable sensors have been widely used for mobile sensing due to their ever-decreasing cost, ease of deployment, and ability to provide continuous monitoring. In this dissertation, we develop a generalizable approach to sensing eating-related behavior. First, we developed Auracle, a wearable earpiece that can automatically detect eating episodes. Using an off-the-shelf contact microphone placed behind the ear, Auracle captures the sound of a person chewing as it passes through the head. This audio data is then processed by a custom circuit board. We collected data with 14 participants for 32 hours in free-living conditions and achieved accuracy exceeding 92.8% and an F1 score exceeding 77.5% for eating detection with 1-minute resolution. Second, we adapted Auracle for measuring children’s eating behavior and improved the accuracy and robustness of the eating-activity detection algorithms. We used this improved prototype in a laboratory study with a sample of 10 children for 60 total sessions and collected 22.3 hours of data in both meal and snack scenarios. Overall, we achieved 95.5% accuracy and a 95.7% F1 score for eating detection with 1-minute resolution. Third, we developed a computer-vision approach for eating detection in free-living scenarios. Using a miniature head-mounted camera, we collected data with 10 participants for about 55 hours. The camera was fixed under the brim of a cap, pointing at the mouth of the wearer and continuously recording video (but not audio) throughout their normal daily activity. We evaluated performance for eating detection using four different Convolutional Neural Network (CNN) models. The best model achieved 90.9% accuracy and a 78.7% F1 score for eating detection with 1-minute resolution. Finally, we validated the feasibility of deploying the 3D CNN model on wearable or mobile platforms when considering computation, memory, and power constraints.
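    All three systems report accuracy and F1 score at 1-minute resolution. A small sketch of that evaluation scheme, under the assumption that episodes are given as (start, end) minute pairs: rasterize ground truth and predictions into per-minute labels, then score. The function names are ours.

```python
import numpy as np

def minute_labels(episodes, total_minutes):
    """Rasterize (start_min, end_min) eating episodes into 0/1 per minute."""
    labels = np.zeros(total_minutes, dtype=int)
    for start, end in episodes:
        labels[start:end] = 1
    return labels

def accuracy_and_f1(pred, truth):
    """Per-minute accuracy and F1 for binary eating/non-eating labels."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    accuracy = (tp + tn) / len(truth)
    f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return accuracy, f1
```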

    Embedding a Grid of Load Cells into a Dining Table for Automatic Monitoring and Detection of Eating Events

    This dissertation describes a “smart dining table” that can detect and measure consumption events. This work is motivated by the growing problem of obesity, which is a global problem and an epidemic in the United States and Europe. Chapter 1 gives a background on the economic burden of obesity and its comorbidities. For the assessment of obesity, we briefly describe the classic dietary assessment tools and discuss their drawbacks and the necessity of more objective, accurate, low-cost, and in-situ automatic dietary assessment tools. We briefly explain the various technologies used for automatic dietary assessment, such as acoustic-, motion-, or image-based systems. This is followed by a literature review of prior work related to the detection of the weights and locations of objects sitting on a table surface. Finally, we state the novelty of this work. In Chapter 2, we describe the construction of a table that uses an embedded grid of load cells to sense the weights and positions of objects. The main challenge is aligning the tops of adjacent load cells to within a tolerance of a few micrometers, which we accomplish using a novel inversion process during construction. Experimental tests found that object weights distributed across 4 to 16 load cells could be measured with 99.97±0.1% accuracy. Testing the surface for flatness at 58 points showed that we achieved approximately 4.2±0.5 µm deviation among adjacent 2x2 grids of tiles. Through empirical measurements, we determined that the table has a signal-to-noise ratio of 40.2 when detecting the smallest expected intake amount (0.5 g) from a normal meal (approximate total weight 560 g), indicating that a tiny amount of intake can be detected well above the noise level of the sensors. In Chapter 3, we describe a pilot experiment that tests the capability of the table to monitor eating. Eleven human subjects were video-recorded for ground truth while eating a meal on the table using a plate, bowl, and cup. To detect consumption events, we describe an algorithm that analyzes the grid of weight measurements in the format of an image. The algorithm segments the image into multiple objects, tracks them over time, and uses a set of rules to detect and measure individual bites of food and drinks of liquid. On average, each meal consisted of 62 consumption events. Event detection accuracy was very high, with an F1 score per subject of 0.91 to 1.0, and an F1 score per container of 0.97 for the plate and bowl and 0.99 for the cup. The experiment demonstrates that our device is capable of detecting and measuring individual consumption events during a meal. Chapter 4 compares the capability of our new tool to monitor eating against previous works that have also monitored table surfaces. We completed a literature search and identified the three state-of-the-art methods to be used for comparison. The main limitation of all previous methods is that they used only one load cell for monitoring, so only the total surface weight can be analyzed. To simulate their operation, the weights of our grid of load cells were summed to treat the 2D data as 1D. Data were prepared according to the requirements of each method. Four metrics were used for the comparison: precision, recall, accuracy, and F1 score. Our method scored the highest in recall, accuracy, and F1 score: compared to all other methods, our method scored 13-21% higher for recall, 8-28% higher for accuracy, and 10-18% higher for F1 score. For precision, our method scored 97%, which is just 1% lower than the highest precision of 98%. In summary, this dissertation describes novel hardware, a pilot experiment, and a comparison against current state-of-the-art tools. We also believe our methods could be used to build a similar surface for other applications besides monitoring consumption.
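    As a rough illustration of the weight-image idea, the sketch below segments the load-cell grid into objects with a connected-component labeler and reports a bite when an object's weight drops and then stabilizes. The noise floor, bite size, and stability window are our assumptions, and the dissertation's rule set is richer than this.

```python
import numpy as np
from scipy import ndimage

def segment_objects(grid: np.ndarray, noise_floor_g: float = 1.0):
    """Label connected loaded regions; grid holds grams per load cell."""
    labels, count = ndimage.label(grid > noise_floor_g)
    weights = ndimage.sum(grid, labels, index=range(1, count + 1))
    return labels, weights  # label image and per-object total weight

def detect_bites(weights, min_bite_g=0.5, stable=10):
    """Report (frame, grams) where an object's weight drops and then holds,
    suggesting food removed rather than a transient touch."""
    events, t = [], stable
    while t < len(weights) - stable:
        before = np.median(weights[t - stable:t])
        after = np.median(weights[t:t + stable])
        if before - after >= min_bite_g:
            events.append((t, before - after))
            t += stable  # skip past this event
        else:
            t += 1
    return events
```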

    Earables: Wearable Computing on the Ears

    Headphones have become established among consumers because they provide private audio channels, for example for listening to music, watching the latest films while commuting, or making hands-free phone calls. Thanks to this clear primary use case, headphones have already achieved wider adoption than other wearables such as smartglasses. In recent years, a new class of wearables has emerged, referred to as "earables": devices designed to be worn in or around the ears that incorporate various sensors to extend the functionality of headphones. The spatial proximity of earables to important anatomical structures of the human body provides an excellent platform for sensing a wide range of properties, processes, and activities. Although some progress has already been made in earables research, its potential is not yet fully exploited. The goal of this dissertation is therefore to provide new insights into the possibilities of earables by exploring advanced sensing approaches that enable the detection of previously inaccessible phenomena. By introducing novel hardware and algorithms, this dissertation aims to push the boundaries of what is achievable with earables and ultimately to establish them as a versatile sensing platform for augmenting human abilities. To lay a sound foundation for the dissertation, this work synthesizes the state of the art in ear-based sensing and presents a uniquely comprehensive taxonomy based on 271 relevant publications. By linking low-level sensing principles to higher-level phenomena, the dissertation then summarizes work from several areas, including (i) physiological monitoring and health, (ii) movement and activity, (iii) interaction, and (iv) authentication and identification. Building on existing research in physiological monitoring and health with earables, this dissertation presents advanced algorithms, statistical evaluations, and empirical studies to demonstrate the feasibility of measuring respiratory rate and detecting episodes of elevated cough frequency using in-ear accelerometers and gyroscopes. These novel sensing capabilities underline the potential of earables to promote healthier lifestyles and enable proactive healthcare. In addition, this dissertation introduces an innovative eye-tracking approach called "earEOG" intended to facilitate activity recognition. By systematically evaluating electrode potentials measured around the ears with a modified headphone, the dissertation opens a new avenue for measuring gaze direction that is less intrusive and more comfortable than previous approaches. A regression model is also introduced to predict absolute changes in gaze angle from the earEOG signal. This development opens up new possibilities for research that integrates seamlessly into daily life and offers deeper insights into human behavior.
    This work further shows how the unique form factor of earables can be combined with sensing to detect novel phenomena. To improve the interaction capabilities of earables, this dissertation presents a discreet input technique called "EarRumble", based on voluntary control of the tensor tympani muscle in the middle ear. The dissertation offers insights into the prevalence, usability, and comfort of EarRumble, together with practical applications in two real-world scenarios. The EarRumble approach extends the ear from a purely receptive organ to one that can not only receive signals but also produce output. In essence, the ear is employed as an additional interactive medium enabling hands-free and eyes-free human-machine communication. EarRumble introduces an interaction technique that users describe as "magical and almost telepathic" and reveals substantial untapped potential in the earables domain. Building on the preceding findings from the various application areas, the dissertation culminates in an open hardware and software platform for earables called "OpenEarable". OpenEarable offers a range of advanced sensing capabilities suitable for a variety of ear-based research applications while remaining easy to manufacture, lowering the barrier to entry into ear-based sensing research and thereby helping to unlock the full potential of earables. The dissertation also contributes fundamental design guidelines and reference architectures for earables. Through this research, the dissertation bridges the gap between fundamental research on ear-based sensing and its practical use in real-world scenarios. In summary, the dissertation delivers new usage scenarios, algorithms, hardware prototypes, statistical evaluations, empirical studies, and design guidelines to advance the field of earable computing. Moreover, it expands the traditional scope of headphones by evolving these audio-focused devices into a platform offering a wide range of advanced sensing capabilities for capturing properties, processes, and activities. This reorientation enables earables to establish themselves as a significant wearable category and brings the vision of earables as a versatile sensing platform for augmenting human abilities ever closer to reality.
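    As a concrete illustration of the respiratory-rate capability mentioned above, the sketch below estimates breaths per minute from a single in-ear accelerometer axis by band-passing the signal and picking the dominant spectral peak. The breathing band and peak-picking rule are our assumptions, not the dissertation's published algorithm.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def respiratory_rate_bpm(accel_axis: np.ndarray, fs: float) -> float:
    """Estimate breaths per minute from one in-ear accelerometer axis."""
    # Assume breathing lies in roughly 0.1-0.7 Hz (6-42 breaths/min).
    sos = butter(2, [0.1, 0.7], btype="bandpass", fs=fs, output="sos")
    x = sosfiltfilt(sos, accel_axis - np.mean(accel_axis))

    # Pick the dominant spectral peak inside the breathing band.
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    return float(freqs[band][np.argmax(spectrum[band])] * 60.0)
```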

    A pervasive body sensor network for monitoring post-operative recovery

    Over the past decade, miniaturisation and cost reduction brought about by the semiconductor industry have led to computers smaller than a pin head, powerful enough to carry out the required processing, and affordable enough to be disposable. Similar technological advances in wireless communication, sensor design, and energy storage have resulted in the development of wireless “Body Sensor Network” (BSN) platforms comprising tiny integrated microsensors with on-board processing and wireless data-transfer capability, offering the prospect of pervasive and continuous home health monitoring. In surgery, the reduced trauma of minimally invasive interventions, combined with initiatives to reduce length of hospital stay and a socioeconomic drive to reduce hospitalisation costs, has resulted in a trend towards earlier discharge from hospital. There is now a real need for objective, pervasive, and continuous post-operative home recovery monitoring systems. Surgical recovery is a multi-faceted and dynamic process involving biological, physiological, functional, and psychological components. Functional recovery (physical independence, activities of daily living, and mobility) is recognised as a good global indicator of a patient’s post-operative course, but has traditionally been difficult to quantify objectively. This thesis outlines the development of a pervasive wireless BSN system to objectively monitor the functional recovery of post-operative patients at home. Biomechanical markers were identified as surrogate measures for activities of daily living and mobility impairment, and an ear-worn activity recognition (e-AR) sensor containing a three-axis accelerometer and a pulse oximeter was used to collect this data. A simulated home environment was created to test a Bayesian classifier framework with multivariate Gaussians to model activity classes. A real-time activity index was used to provide information on the intensity of activity being performed. Mobility impairment was simulated with bracing systems, and a multiresolution wavelet analysis and margin-based feature selection framework was used to detect impaired mobility. The e-AR sensor was tested in a home environment before its clinical use in monitoring the post-operative home recovery of real patients who had undergone surgery. Such a system may eventually form part of an objective pervasive home recovery monitoring system tailored to the needs of today’s post-operative patient.
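    The classifier framework described above maps onto a simple maximum-a-posteriori rule with one multivariate Gaussian per activity class. The sketch below is our rendering of that idea, not the thesis code; it assumes per-window feature vectors grouped by class.

```python
import numpy as np
from scipy.stats import multivariate_normal

class GaussianActivityClassifier:
    """Bayesian classifier with one multivariate Gaussian per activity."""

    def fit(self, features_by_class):
        """features_by_class: dict mapping class name -> (n, d) array."""
        self.models = {
            c: multivariate_normal(np.mean(x, axis=0), np.cov(x, rowvar=False))
            for c, x in features_by_class.items()
        }
        total = sum(len(x) for x in features_by_class.values())
        self.priors = {c: len(x) / total for c, x in features_by_class.items()}
        return self

    def predict(self, feature_vector):
        # Maximum a posteriori: prior times class-conditional likelihood.
        return max(self.models, key=lambda c:
                   self.priors[c] * self.models[c].pdf(feature_vector))
```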

    Using Transfer Learning to Train Individualized Models to Detect Eating Episodes from Daily Wrist Motion

    This thesis considers the problem of detecting periods of eating in free-living conditions by analyzing wrist motion data collected using sensors embedded within a typical smartwatch. Previous work by our research group included the collection of a dataset containing 354 days of recorded wrist motion data from 351 different people (approximately one day of data per person) [42]. A machine learning model was then trained to classify this wrist motion data as either eating or non-eating [40]. We refer to this model as the group model. Subsequent work in our research group collected approximately ten days of data each for eight new individuals and trained a model for each person solely using their own data [51]. We refer to these models as individual models. It was observed that, in most cases, the individual models outperformed the group model when evaluated on the data of their corresponding individual, but at the cost of requiring each individual to collect two weeks of additional data. The novelty of this work is using transfer learning to leverage features learned by the group model and apply them to new individual models to further increase performance and possibly reduce the amount of individual data needed. Two datasets were used in this work. The first was the Clemson All Day (CAD) dataset described above, which includes a total of 4,680 hours of data and 1,063 meals. The second was the Multiday dataset, which comprises at least ten days of free-living wrist motion data for each of eight individuals. Both datasets were pre-processed using smoothing and normalization techniques. Training samples were then generated using a sliding-window approach with a window size of six minutes. All group, individual, and transfer learning models evaluated in this work used an identical convolutional neural network (CNN) architecture. For a given window, the classifier generated a value representing the probability of eating (P(E)) in the window. Entire days of wrist motion data were passed to the network to produce a continuous P(E) sequence for the day. This sequence was processed using a dual-thresholding technique to locate predicted segments of eating within the recording. In our results, the transfer learning model achieved an eating-episode true positive rate (TPR) of 81% with a false positive per true positive ratio (FP/TP) of 1.40. Compared to the individual model, this was a 6% decrease in episode TPR but a 43% improvement in FP/TP. The transfer learning model showed a time-weighted accuracy (AccW) of 80%, only a 1% decrease relative to the individual model. After removing an outlier from the Multiday dataset and rerunning our experiments, the transfer learning model showed an episode TPR of 86% with an FP/TP of 1.34. Compared to the individual model, this was only a 3% decrease in TPR and a 46% improvement in FP/TP. Excluding the outlier, the transfer learning model also showed an 83% AccW, a 1% increase relative to the individual model. Furthermore, the transfer learning model reduced training times by 12% compared to the individual model. In conclusion, we found evidence that transfer learning can be used to improve individualized eating-detection models by increasing weighted accuracy and decreasing false detections.
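    The dual-thresholding step lends itself to a short sketch: a segment opens when P(E) crosses a high threshold, is extended backward and forward while P(E) remains above a lower threshold, and closes when it falls below it. The two threshold values here are illustrative, not the thesis's tuned values.

```python
import numpy as np

def segment_eating(p_eat: np.ndarray, t_high: float = 0.8, t_low: float = 0.4):
    """Return (start, end) index pairs of predicted eating segments."""
    segments, inside, start = [], False, 0
    for i, p in enumerate(p_eat):
        if not inside and p >= t_high:
            inside, start = True, i
            # Extend the segment backward while P(E) stays above t_low.
            while start > 0 and p_eat[start - 1] >= t_low:
                start -= 1
        elif inside and p < t_low:
            segments.append((start, i))
            inside = False
    if inside:
        segments.append((start, len(p_eat)))
    return segments
```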

    Detection of Temporalis Muscle Activity through Mechanically Amplified Force Measurement in Glasses

    Doctoral dissertation, Department of Mechanical and Aerospace Engineering, College of Engineering, Seoul National University, August 2017. Advisor: Kunwoo Lee. Recently, the glasses form factor has been widely adopted for wearable devices that provide virtual and augmented reality in addition to their natural function as a visual aid. These approaches, however, have not exploited the inherent kinematic structure of glasses, which is composed of the temples and the hinges. When glasses are worn, force is concentrated at the hinge, which connects the front frame and the temple, by the law of the lever. Moreover, since the temple passes over the temporalis muscle, chewing and wink activity, anatomically driven by the contraction and relaxation of the temporalis muscle, can be detected from the mechanically amplified force measured at the hinge. This study presents a new and effective method for automatic and objective measurement of temporalis muscle activity through the natural lever mechanism of glasses. By inserting a load-cell-integrated wireless circuit module into both hinges of a 3D-printed glasses frame, we developed a system that responds consistently to temporalis muscle activity regardless of individual differences in form factor. This offers the potential to improve on previous studies by avoiding the morphological, behavioral, and environmental constraints of skin-attached, proximity, and sound sensors. In this study, we collected data featuring sedentary rest, chewing, walking, chewing while walking, talking, and winking from a 10-subject user study. The collected data were transformed into a series of 84-dimensional feature vectors, each composed of statistical features from both the temporal and spectral domains. These feature vectors were then used to train a classifier model implemented with the support vector machine (SVM) algorithm. The model classified the featured activities (chewing, wink, and physical activity) with an average F1 score of 93.7%. This study provides a novel approach to the monitoring of ingestive behavior (MIB) in a non-intrusive and unobtrusive manner. It opens the possibility of applying MIB in daily life by distinguishing food intake from other activities such as walking, talking, and winking with high accuracy and wearability. Furthermore, by applying this approach to a sensor-integrated hair band, it could potentially be used for medical monitoring of sleep bruxism or temporomandibular dysfunction.
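    The classifier-design procedure described above (an SVM tuned by grid search with cross-validation) can be sketched with scikit-learn as follows. The grid values are placeholders, and X stands for the (n_samples, 84) feature matrix described above.

```python
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_activity_classifier(X, y):
    """X: (n, 84) temporal+spectral features; y: activity labels."""
    # Standardize features, then fit an RBF-kernel SVM whose C and gamma
    # are chosen by 5-fold cross-validated grid search.
    pipeline = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    grid = {"svc__C": [1, 10, 100], "svc__gamma": [0.001, 0.01, 0.1]}
    search = GridSearchCV(pipeline, grid, cv=5, scoring="f1_macro")
    search.fit(X, y)
    return search.best_estimator_
```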