
    From wearable towards epidermal computing: soft wearable devices for rich interaction on the skin

    Human skin provides a large, always-available, and easy-to-access real estate for interaction. Recent advances in new materials, electronics, and human-computer interaction have led to the emergence of electronic devices that reside directly on the user's skin. These conformal devices, referred to as Epidermal Devices, have mechanical properties compatible with human skin: they are very thin, often thinner than a human hair; they deform elastically when the body moves and stretch with the user's skin. Firstly, this thesis provides a conceptual understanding of Epidermal Devices in the HCI literature. We compare and contrast them with other technical approaches that enable novel on-skin interactions. Then, through a multi-disciplinary analysis of Epidermal Devices, we identify the design goals and challenges that need to be addressed to advance this emerging research area in HCI. Following this, our fundamental empirical research investigated how epidermal devices of different rigidity levels affect passive and active tactile perception. Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Based on these findings, we derive design recommendations for realizing epidermal devices. Secondly, this thesis contributes novel Epidermal Devices that enable rich on-body interaction. SkinMarks contributes to the fabrication and design of novel Epidermal Devices that are highly skin-conformal and enable touch, squeeze, and bend sensing with co-located visual output. These devices can be deployed on highly challenging body locations, enabling novel interaction techniques and expanding the design space of on-body interaction. Multi-Touch Skin enables high-resolution multi-touch input on the body. We present the first non-rectangular and high-resolution multi-touch sensor overlays for use on skin and introduce a design tool that generates such sensors in custom shapes and sizes. Empirical results from two technical evaluations confirm that the sensor achieves a high signal-to-noise ratio on the body under various grounding conditions and has high spatial accuracy even when subjected to strong deformations. Thirdly, because Epidermal Devices are in contact with the skin, they offer opportunities for sensing rich physiological signals from the body. To leverage this unique property, this thesis presents rapid fabrication and computational design techniques for realizing Multi-Modal Epidermal Devices that can measure multiple physiological signals from the human body. Devices fabricated through these techniques can measure ECG (Electrocardiogram), EMG (Electromyogram), and EDA (Electro-Dermal Activity). We also contribute a computational design and optimization method, based on underlying human anatomical models, that creates device designs offering an optimal trade-off between physiological signal acquisition capability and device size. The graphical tool allows designers to easily specify design preferences and visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the predictions of the optimizer and experimentally collected physiological data. Finally, taking a multi-disciplinary perspective, we outline a roadmap for future research in this area by highlighting the next important steps, opportunities, and challenges.
    Taken together, this thesis contributes towards a holistic understanding of Epidermal Devices: it provides an empirical and conceptual understanding as well as technical insights through contributions in DIY (Do-It-Yourself), rapid fabrication, and computational design techniques.
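    To make the quality-versus-size trade-off concrete, here is a minimal Python sketch that scalarizes the two objectives with a designer-chosen preference weight. The electrode-spacing parameter and the saturating signal-quality model are hypothetical stand-ins for illustration, not the anatomy-based model from the thesis.

    # Illustrative sketch of a scalarized quality-vs-size trade-off, in the
    # spirit of the designer-in-the-loop optimization described above. The
    # signal-quality model is a made-up stand-in, not the thesis's model.
    import numpy as np

    def signal_quality(spacing_mm: float) -> float:
        """Hypothetical stand-in: quality saturates as electrode spacing grows."""
        return 1.0 - np.exp(-spacing_mm / 30.0)

    def device_size(spacing_mm: float) -> float:
        """Normalized device footprint grows with electrode spacing."""
        return spacing_mm / 100.0

    def score(spacing_mm: float, w: float) -> float:
        """Designer preference w in [0, 1]: 1 favors quality, 0 favors compactness."""
        return w * signal_quality(spacing_mm) - (1.0 - w) * device_size(spacing_mm)

    candidates = np.linspace(5.0, 100.0, 200)   # candidate spacings in mm
    for w in (0.3, 0.5, 0.8):                   # three designer preferences
        best = candidates[np.argmax([score(s, w) for s in candidates])]
        print(f"w={w:.1f}: best spacing ~{best:.1f} mm")

    Sweeping the preference weight mirrors the designer-in-the-loop workflow: each setting yields a different optimum along the quality-size frontier, which the designer can inspect and adjust.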

    Developing Transferable Deep Models for Mobile Health

    Human behavior is one of the key facets of health. A major portion of healthcare spending in the US is attributed to chronic diseases, which are linked to behavioral risk factors such as smoking, drinking, and unhealthy eating. Mobile devices that are integrated into people's everyday lives make it possible for us to get a closer look at behavior. Two of the most commonly used sensing modalities are Ecological Momentary Assessments (EMAs), i.e., surveys about mental states, environment, and other factors, and wearable sensors that capture high-frequency contextual and physiological signals. One of the main visions of mobile health (mHealth) is sensor-based behavior modification. Contextual data collected from participants is typically used to train a risk prediction model for adverse events such as smoking, which can then be used to inform intervention design. However, an mHealth study involves several choices, such as the demographics of the participants, the type of sensors used, and the questions included in the EMA. This results in two technical challenges to using machine learning models effectively across mHealth studies. The first is the problem of domain shift, where the data distribution varies across studies; as a result, models trained on one study can have sub-optimal performance on a different study. Domain shift is common in wearable sensor data since there are several sources of variability, such as sensor design, the placement of the sensor on the body, and the demographics of the users. The second challenge is that of covariate-space shift, where the input space changes across datasets. This is common across EMA datasets since questions can vary based on the study. This thesis studies the problems of covariate-space shift and domain shift in mHealth data. First, I study the problem of domain shift caused by differences in sensor type and placement in ECG and PPG signals. I propose a self-supervised-learning-based domain adaptation method that captures the physiological structure of these signals to improve the transfer performance of predictive models. Second, I present a method to find a common input representation, irrespective of the fine-grained questions in EMA datasets, to overcome the problem of covariate-space shift. The next challenge to the deployment of ML models in health is explainability. I explore the problem of bridging the gap between explainability methods and domain experts and present a method to generate plausible, relevant, and convincing explanations.
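    As a toy illustration of the covariate-space problem, the Python sketch below maps study-specific EMA questions onto a shared construct space so that downstream models see a fixed input representation. The construct names, question wordings, and mappings are invented for illustration; the thesis's actual method for finding a common representation is more general than this hand-written dictionary.

    # Illustrative sketch: align EMA datasets with different question sets onto
    # a shared coarse construct space, so one model can consume both studies.
    import numpy as np

    CONSTRUCTS = ["stress", "craving", "mood"]

    # Each study phrases its questions differently; map each question to a construct.
    STUDY_A = {"How stressed are you?": "stress",
               "Rate your urge to smoke": "craving"}
    STUDY_B = {"Feeling tense right now?": "stress",
               "How strong is your craving?": "craving",
               "How is your mood?": "mood"}

    def to_shared(responses: dict, mapping: dict) -> np.ndarray:
        """Average responses per construct; NaN marks constructs a study never asks."""
        vec = np.full(len(CONSTRUCTS), np.nan)
        for i, c in enumerate(CONSTRUCTS):
            vals = [v for q, v in responses.items() if mapping.get(q) == c]
            if vals:
                vec[i] = float(np.mean(vals))
        return vec

    print(to_shared({"How stressed are you?": 4, "Rate your urge to smoke": 2}, STUDY_A))
    print(to_shared({"Feeling tense right now?": 3, "How is your mood?": 5}, STUDY_B))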

    Scalable Teaching and Learning via Intelligent User Interfaces

    The increasing demand for higher education and educational budget cuts lead to large class sizes. Learning at scale is also the norm in Massive Open Online Courses (MOOCs). While it seems cost-effective, the massive scale of these classes challenges the adoption of proven pedagogical approaches and practices that work well in small classes, especially those that emphasize interactivity, active learning, and personalized learning. As a result, the standard teaching approach in today's large classes is still lecture-based and teacher-centric, with limited active learning activities and relatively low teaching and learning effectiveness. This dissertation explores the use of Intelligent User Interfaces (IUIs) to facilitate the efficient and effective adoption of tried-and-true pedagogies at scale. The first system is MindMiner, an instructor-side data exploration and visualization system for peer review understanding. MindMiner helps instructors externalize and quantify their subjective domain knowledge, interactively make sense of student peer review data, and improve data exploration efficiency via distance metric learning. MindMiner also helps instructors generate customized feedback to students at scale. We then present BayesHeart, a probabilistic approach for implicit heart rate monitoring on smartphones. When integrated with MOOC mobile clients, BayesHeart can capture learners' heart rates implicitly while they watch videos. Such information is the foundation of learner attention/affect modeling, which enables a 'sensorless' and scalable feedback channel from students to instructors. We then present CourseMIRROR, an intelligent mobile system that uses Natural Language Processing (NLP) techniques to enable scalable reflection prompts in large classrooms. CourseMIRROR 1) automatically reminds and collects students' in-situ written reflections after each lecture; 2) continuously monitors the quality of a student's reflection at composition time and generates helpful feedback to scaffold reflection writing; and 3) summarizes the reflections and presents the most significant ones to both instructors and students. Last, we present ToneWars, an educational game connecting Chinese as a Second Language (CSL) learners with native speakers via collaborative mobile gameplay. We present a scalable approach to enable authentic competition and skill comparison with native speakers by modeling their interaction patterns and language skills asynchronously. We also demonstrate the effectiveness of such modeling in a longitudinal study.
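    A minimal sketch of the kind of distance metric learning that MindMiner-style instructor feedback could drive: pairs the instructor marks as "similar" are pulled together and "dissimilar" pairs are pushed apart by reweighting features. The data, hinge loss, and diagonal-metric restriction are illustrative assumptions, not MindMiner's actual formulation.

    # Illustrative sketch: learn per-feature weights (a diagonal metric) from
    # instructor-labeled similar/dissimilar pairs of peer reviews.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 4))                 # 20 peer reviews, 4 features
    similar = [(0, 1), (2, 3)]                   # instructor: these pairs are alike
    dissimilar = [(0, 5), (1, 7)]                # instructor: these pairs differ

    w = np.ones(4)                               # per-feature weights (the metric)
    lr, margin = 0.05, 1.0
    for _ in range(200):
        grad = np.zeros(4)
        for i, j in similar:                     # pull similar pairs together
            grad += (X[i] - X[j]) ** 2
        for i, j in dissimilar:                  # push dissimilar pairs apart (hinge)
            d2 = np.dot(w, (X[i] - X[j]) ** 2)
            if d2 < margin:
                grad -= (X[i] - X[j]) ** 2
        w = np.clip(w - lr * grad, 0.0, None)    # keep weights non-negative

    print("learned feature weights:", np.round(w, 2))

    Features that receive high weight are the ones that matter for the instructor's notion of similarity, which is what lets the system cluster reviews and target feedback at scale.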

    Automatic inference of latent emotion from spontaneous facial micro-expressions

    Emotional states exert a profound influence on individuals' overall well-being, impacting them both physically and psychologically. Accurate recognition and comprehension of human emotions represent a crucial area of scientific exploration. Facial expressions, vocal cues, body language, and physiological responses provide valuable insights into an individual's emotional state, with facial expressions being universally recognised as dependable indicators of emotions. This thesis centres on three vital research aspects concerning the automated inference of latent emotions from spontaneous facial micro-expressions, seeking to enhance and refine our understanding of this complex domain. Firstly, the research aims to detect and analyse activated Action Units (AUs) during the occurrence of micro-expressions. AUs correspond to facial muscle movements. Although previous studies have established links between AUs and conventional facial expressions, no such connections have been explored for micro-expressions. Therefore, this thesis develops computer vision techniques to automatically detect activated AUs in micro-expressions, bridging a gap in existing studies. Secondly, the study explores the evolution of micro-expression recognition techniques, ranging from early handcrafted feature-based approaches to modern deep-learning methods. These approaches have significantly contributed to the field of automatic emotion recognition. However, existing methods primarily focus on capturing local spatial relationships, neglecting global relationships between different facial regions. To address this limitation, a novel third-generation architecture is proposed. This architecture can concurrently capture both short- and long-range spatiotemporal relationships in micro-expression data, aiming to enhance the accuracy of automatic emotion recognition and improve our understanding of micro-expressions. Lastly, the thesis investigates the integration of multimodal signals to enhance emotion recognition accuracy. Depth information complements conventional RGB data by providing enhanced spatial features for analysis, while the integration of physiological signals with facial micro-expressions improves emotion discrimination. By incorporating multimodal data, the objective is to enhance machines' understanding of latent emotions and improve latent emotion recognition accuracy in spontaneous micro-expression analysis.
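    A minimal PyTorch sketch of the short-plus-long-range idea described above: a 3D convolution captures local spatiotemporal patterns while self-attention relates all (time, region) positions globally. Layer sizes, input shapes, and the overall layout are illustrative assumptions, not the thesis's proposed architecture.

    # Illustrative sketch: combine local convolution (short-range) with
    # self-attention (long-range) over a clip of facial frames.
    import torch
    import torch.nn as nn

    class LocalGlobalBlock(nn.Module):
        def __init__(self, channels: int = 32, heads: int = 4):
            super().__init__()
            # 3D convolution captures local spatiotemporal patterns.
            self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
            # Self-attention relates all (time, region) positions globally.
            self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
            self.norm = nn.LayerNorm(channels)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, time, height, width)
            x = torch.relu(self.conv(x))
            b, c, t, h, w = x.shape
            tokens = x.flatten(2).transpose(1, 2)       # (batch, t*h*w, channels)
            attended, _ = self.attn(tokens, tokens, tokens)
            tokens = self.norm(tokens + attended)       # residual + norm
            return tokens.transpose(1, 2).reshape(b, c, t, h, w)

    # Toy input: 2 clips, 32 channels, 8 frames, 6x6 spatial grid.
    out = LocalGlobalBlock()(torch.randn(2, 32, 8, 6, 6))
    print(out.shape)   # torch.Size([2, 32, 8, 6, 6])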