
    Exploring the State-of-Receptivity for mHealth Interventions

    Recent advancements in sensing techniques for mHealth applications have led to the successful development and deployment of several mHealth intervention designs, including Just-In-Time Adaptive Interventions (JITAIs). JITAIs show great potential because they aim to provide the right type and amount of support, at the right time. Timing the delivery of a JITAI such that the user is receptive and available to engage with the intervention is crucial for a JITAI to succeed. Although previous research has extensively explored the role of context in users' responsiveness towards generic phone notifications, it has not been thoroughly explored for actual mHealth interventions. In this work, we explore the factors affecting users' receptivity towards JITAIs. To this end, we conducted a study with 189 participants, over a period of 6 weeks, where participants received interventions to improve their physical activity levels. The interventions were delivered by a chatbot-based digital coach, Ally, which was available on Android and iOS platforms. We define several metrics to gauge receptivity towards the interventions, and found that (1) several participant-specific characteristics (age, personality, and device type) show significant associations with the overall participant receptivity over the course of the study, and that (2) several contextual factors (day/time, phone battery, phone interaction, physical activity, and location) show significant associations with the participant receptivity in the moment. Further, we explore the relationship between the effectiveness of the interventions and receptivity towards those interventions; based on our analyses, we speculate that being receptive to interventions helped participants achieve physical activity goals, which in turn motivated participants to be more receptive to future interventions. Finally, we build machine-learning models to detect receptivity, with up to a 77% increase in F1 score over a biased random classifier.
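The baseline referenced above, a "biased random" classifier, predicts the positive class with the same frequency it occurs in the data. A minimal sketch of how such a baseline comparison could be set up (the labels, seed, and function names here are illustrative, not the paper's pipeline):

```python
# Illustrative sketch: scoring a receptivity detector against a "biased
# random" baseline that predicts "receptive" (1) with probability equal to
# the empirical positive rate. Labels below are invented for demonstration.
import random

def f1_score(y_true, y_pred):
    """Binary F1 with the positive class encoded as 1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def biased_random_predictions(y_true, seed=42):
    """Predict 1 with probability equal to the positive rate in y_true."""
    rng = random.Random(seed)
    pos_rate = sum(y_true) / len(y_true)
    return [1 if rng.random() < pos_rate else 0 for _ in y_true]
```

Comparing `f1_score(y_true, model_preds)` against `f1_score(y_true, biased_random_predictions(y_true))` gives the kind of relative improvement the abstract reports.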

    Towards Active Learning Interfaces for Multi-Inhabitant Activity Recognition

    Semi-supervised approaches for activity recognition are a promising way to address the labeled-data scarcity problem. These methods only require a small training set to be initialized, and the model is continuously updated and improved over time. Among the several solutions in the literature, active learning is emerging as an effective technique to significantly boost the recognition rate: when the model is uncertain about the current activity performed by the user, the system asks her to provide the ground truth. This feedback is then used to update the recognition model. While active learning has mostly been proposed in single-inhabitant settings, several questions arise when such a system has to be implemented in a realistic environment with multiple users. Whom should the system ask for feedback when it is uncertain about a collaborative activity? In this paper, we investigate this and other questions on the topic, proposing a preliminary study of the requirements of an active learning interface for multi-inhabitant settings. In particular, we formalize the problem and describe the solutions adopted in our system prototype.
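The uncertainty-driven feedback loop described above can be sketched as follows; the model class, confidence threshold, and feedback callback are hypothetical stand-ins, not the authors' prototype:

```python
# Hypothetical sketch of an uncertainty-based active learning step: if the
# classifier's confidence is below a threshold, ask a user for the
# ground-truth label and update the model online.

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff, not from the paper

class ToyModel:
    """Stand-in for an incrementally updatable activity classifier."""
    def __init__(self):
        self.feedback = []  # (sample, label) pairs collected from users

    def predict(self, sample):
        # Toy rule: prediction and confidence are read off the sample
        # itself, just to make the control flow runnable.
        return sample["guess"], sample["confidence"]

    def update(self, sample, label):
        self.feedback.append((sample, label))

def active_learning_step(model, sample, request_feedback):
    """Return a label, querying a user only when the model is uncertain."""
    label, confidence = model.predict(sample)
    if confidence < CONFIDENCE_THRESHOLD:
        true_label = request_feedback(sample)  # e.g., prompt an inhabitant
        model.update(sample, true_label)       # incremental model update
        return true_label
    return label
```

In a multi-inhabitant setting, the open question the paper raises is precisely which inhabitant `request_feedback` should route the query to.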

    Context-aware mobile computing: learning context-dependent personal preferences from a wearable sensor array


    The State of Algorithmic Fairness in Mobile Human-Computer Interaction

    This paper explores the intersection of Artificial Intelligence and Machine Learning (AI/ML) fairness and mobile human-computer interaction (MobileHCI). Through a comprehensive analysis of MobileHCI proceedings published between 2017 and 2022, we first aim to understand the current state of algorithmic fairness in the community. By manually analyzing 90 papers, we found that only a small portion (5%) thereof adheres to modern fairness reporting, such as analyses conditioned on demographic breakdowns. At the same time, the overwhelming majority draws its findings from highly-educated, employed, and Western populations. We situate these findings within recent efforts to capture the current state of algorithmic fairness in mobile and wearable computing, and envision that our results will serve as an open invitation to the design and development of fairer ubiquitous technologies.
    Comment: arXiv admin note: text overlap with arXiv:2303.1558

    Sensing and indicating interruptibility in office workplaces

    In office workplaces, interruptions by co-workers, emails or instant messages are common. Many of these interruptions are useful as they might help resolve questions quickly and increase the productivity of the team. However, knowledge workers interrupted at inopportune moments experience longer task resumption times, lower overall performance, more negative emotions, and make more errors than if they were to be interrupted at more appropriate moments. To reduce the cost of interruptions, several approaches have been suggested, ranging from simply closing office doors to automatically measuring and indicating a knowledge worker’s interruptibility - the availability for interruptions - to co-workers. When it comes to computer-based interruptions, such as emails and instant messages, several studies have shown that they can be deferred to automatically detected breakpoints during task execution, which reduces their interruption cost. For in-person interruptions, one of the most disruptive and time-consuming types of interruptions in office workplaces, the predominant approaches are still manual strategies to physically indicate interruptibility, such as wearing headphones or using manual busy lights. However, manual approaches are cumbersome to maintain and thus are not updated regularly, which reduces their usefulness. To automate the measurement and indication of interruptibility, researchers have looked at a variety of data that can be leveraged, ranging from contextual data, such as audio and video streams, keyboard and mouse interaction data, or task characteristics all the way to biometric data, such as heart rate data or eye traces. While studies have shown promise for the use of such sensors, they were predominantly conducted on small and controlled tasks over short periods of time and mostly limited to either contextual or biometric sensors. 
Little is known about their accuracy and applicability for long-term usage in the field, in particular in office workplaces. In this work, we developed an approach to automatically measure interruptibility in office workplaces, using computer interaction sensors, which are one type of contextual sensor, and biometric sensors. In particular, we conducted one lab and two field studies with a total of 33 software developers. Using the collected computer interaction and biometric data, we used machine learning to train interruptibility models. Overall, the results of our studies show that we can automatically predict interruptibility with a high accuracy of 75.3%, improving on a baseline majority classifier by 26.6%. An automatic measure of interruptibility can consequently be used to indicate the status to others, allowing them to make a well-informed decision on when to interrupt. While there are some automatic approaches to indicate interruptibility on a computer in the form of contact list applications, they do not help to reduce in-person interruptions. Only very few researchers have combined the benefits of an automatic measurement with a physical indicator, and their effect in office workplaces over longer periods of time is unknown. In our research, we developed the FlowLight, an automatic interruptibility indicator in the form of a traffic-light-like LED placed on a knowledge worker's desk. We evaluated the FlowLight in a large-scale field study with 449 participants from 12 countries. The evaluation revealed that after the introduction of the FlowLight, the number of in-person interruptions decreased by 46% (based on 36 interruption logs), awareness of the potential harm of interruptions was elevated, and participants felt more productive (based on 183 survey responses and 23 interview transcripts), and 86% remained active users even after the two-month study period ended (based on 449 online usage logs).
Overall, our research shows that we can successfully reduce in-person interruption cost in office workplaces by sensing and indicating interruptibility. In addition, our research can be extended and opens up new opportunities to further support interruption management, for example, by integrating other, more accurate biometric sensors to improve the interruptibility model, or by using the model to reduce self-interruptions.
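For context on the reported 26.6% improvement: a majority classifier simply predicts the most frequent interruptibility state seen in training. A minimal sketch of that baseline (the labels here are invented):

```python
# Sketch of the majority-class baseline mentioned above: always predict the
# most frequent interruptibility state from the training labels.
from collections import Counter

def majority_baseline_accuracy(train_labels, test_labels):
    majority_class = Counter(train_labels).most_common(1)[0][0]
    correct = sum(1 for y in test_labels if y == majority_class)
    return correct / len(test_labels)
```

Any learned interruptibility model only adds value to the extent it beats this constant-prediction accuracy.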

    XAIR: A Framework of Explainable AI in Augmented Reality

    Explainable AI (XAI) has established itself as an important component of AI-driven interactive systems. With Augmented Reality (AR) becoming more integrated in daily lives, the role of XAI also becomes essential in AR because end-users will frequently interact with intelligent services. However, it is unclear how to design effective XAI experiences for AR. We propose XAIR, a design framework that addresses "when", "what", and "how" to provide explanations of AI output in AR. The framework was based on a multi-disciplinary literature review of XAI and HCI research, a large-scale survey probing 500+ end-users' preferences for AR-based explanations, and three workshops with 12 experts collecting their insights about XAI design in AR. XAIR's utility and effectiveness were verified via a study with 10 designers and another study with 12 end-users. XAIR can provide guidelines for designers, inspiring them to identify new design opportunities and achieve effective XAI designs in AR.
    Comment: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

    Exploring smartphone keyboard interactions for Experience Sampling Method driven probe generation

    Keyboard interaction patterns on a smartphone are the input for many intelligent emotion-aware applications, such as adaptive interfaces, optimized keyboard layouts, and automatic emoji recommendation in IM applications. The simplest approach, called the Experience Sampling Method (ESM), is to systematically gather self-reported emotion labels from users, which act as the ground-truth labels, and build a supervised prediction model for emotion inference. However, as manual self-reporting is fatigue-inducing and attention-demanding, the self-report requests should be scheduled at favorable moments to ensure high-fidelity responses. In this paper, we perform fine-grained keyboard interaction analysis to determine suitable probing moments. Keyboard interaction patterns, both cadence and latency between strokes, translate nicely to frequency- and time-domain analysis. We conduct a 3-week in-the-wild study (N = 22) to log keyboard interaction patterns and self-report details indicating (in)opportune probing moments. Analysis of the dataset reveals that time-domain features (e.g., session length, session duration) and frequency-domain features (e.g., number of peak amplitudes, value of peak amplitude) vary significantly between opportune and inopportune probing moments. Driven by these analyses, we develop a generalized (all-user) Random Forest based model, which can identify opportune probing moments with an average F-score of 93%. We also carry out an explainability analysis of the model using SHAP (SHapley Additive exPlanations), which reveals that session length and peak amplitude have the strongest influence in determining the probing moments.
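The time-domain features named above (session length, session duration) could be computed from per-session keystroke timestamps along these lines; the feature names and the latency feature are illustrative, not the paper's exact definitions:

```python
# Hypothetical feature extraction for one typing session, given a sorted,
# non-empty list of keystroke timestamps in seconds.
def session_features(timestamps):
    latencies = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return {
        "session_length": len(timestamps),              # keystroke count
        "session_duration": timestamps[-1] - timestamps[0],
        "mean_latency": sum(latencies) / len(latencies) if latencies else 0.0,
    }
```

Feature dictionaries like this, one per session, would then feed a classifier such as the Random Forest the abstract describes.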