
    Monitoring Quality of Life Indicators at Home from Sparse and Low-Cost Sensor Data.

    Supporting older people, many of whom live with chronic conditions or cognitive and physical impairments, to live independently at home is of increasing importance due to ageing demographics. To aid independent living at home, much effort is being directed at reliably detecting activities from sensor data to monitor people’s quality of life or to enhance self-management of their own health. Current efforts typically leverage smart homes that have large numbers of sensors installed to overcome challenges in the accurate detection of activities. In this work, we report on the results of machine learning models based on data collected with a small number of low-cost, off-the-shelf passive sensors that were retrofitted in real homes, some with more than a single occupant. Models were developed from the collected sensor data to recognize activities of daily living, such as eating and dressing, as well as meaningful activities, such as reading a book and socializing. We evaluated five algorithms and found that a Recurrent Neural Network was the most accurate in recognizing activities. However, many activities remain difficult to detect, in particular meaningful activities, which are characterized by high levels of individual personalization. Our work contributes to applying smart healthcare technology in real-world home settings.
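
    The paper does not include code, so the following is only a minimal, hypothetical sketch of the kind of recurrent model the abstract describes: a GRU classifier over fixed-length windows of binary events from a handful of passive sensors. The window length, sensor encoding, layer sizes, and class count are assumptions made for illustration, not the authors' configuration.

```python
# Illustrative sketch only: sensor encoding, window length, and network sizes
# are assumptions chosen for clarity, not the paper's actual setup.
import torch
import torch.nn as nn

class ActivityRNN(nn.Module):
    """GRU classifier over fixed-length windows of binary sensor events."""

    def __init__(self, n_sensors: int, n_activities: int, hidden: int = 64):
        super().__init__()
        self.rnn = nn.GRU(input_size=n_sensors, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_activities)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_sensors); 1 = sensor triggered in that step.
        _, h = self.rnn(x)          # final hidden state: (1, batch, hidden)
        return self.head(h[-1])     # activity logits: (batch, n_activities)

if __name__ == "__main__":
    # Toy usage with random data standing in for retrofitted-home sensor windows.
    model = ActivityRNN(n_sensors=10, n_activities=8)
    windows = torch.randint(0, 2, (32, 120, 10)).float()  # 32 windows, 120 steps
    labels = torch.randint(0, 8, (32,))
    loss = nn.CrossEntropyLoss()(model(windows), labels)
    loss.backward()  # an optimizer.step() would complete one training step
    print(loss.item())
```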

    User Characteristics in Explainable AI: The Rabbit Hole of Personalization?

    As Artificial Intelligence (AI) becomes ubiquitous, the need for Explainable AI (XAI) has become critical for transparency and trust among users. A significant challenge in XAI is catering to diverse users, such as data scientists, domain experts, and end-users. Recent research has started to investigate how users' characteristics impact interactions with and the user experience of explanations, with a view to personalizing XAI. However, are we heading down a rabbit hole by focusing on unimportant details? Our research aimed to investigate how user characteristics are related to using, understanding, and trusting an AI system that provides explanations. Our empirical study with 149 participants who interacted with an XAI system that flagged inappropriate comments showed that very few user characteristics mattered; only age and the personality trait openness influenced actual understanding. Our work provides evidence to reorient user-focused XAI research and to question the pursuit of personalized XAI based on fine-grained user characteristics.

    Monitoring meaningful activities using small low-cost devices in a smart home

    A challenge associated with an ageing population is increased demand on health and social care, creating a greater need to enable persons to live independently in their own homes. Ambient assisted living technology aims to address this by monitoring occupants’ ‘activities of daily living’ using smart home sensors to alert caregivers to abnormalities in routine tasks and deteriorations in a person’s ability to care for themselves. However, there has been less focus on using sensing technology to monitor a broader scope of so-called ‘meaningful activities’, which promote a person’s emotional, creative, intellectual, and spiritual needs. In this paper, we describe the development of a toolkit comprising off-the-shelf, affordable sensors that allows persons with dementia and Parkinson’s disease to monitor meaningful activities as well as activities of daily living in order to self-manage their life and well-being. We describe two evaluations of the toolkit: first, a lab-based study to test the installation of the system, including the acuity and placement of sensors; and second, an in-the-wild study in which subjects who were not target users of the toolkit but identified as technology enthusiasts evaluated the feasibility of the toolkit for monitoring activities in and around real homes. Subjects in the in-the-wild study reported minimal obstructions to installation and were able to carry out and enjoy activities without obstruction from the sensors, revealing that meaningful activities may be monitored remotely using affordable, passive sensors. We propose that our toolkit may enhance assistive living systems by monitoring a wider range of activities than activities of daily living.

    Exploring the Impact of Lay User Feedback for Improving AI Fairness

    Fairness in AI is a growing concern for high-stakes decision making. Engaging stakeholders, especially lay users, in fair AI development is promising yet overlooked. Recent efforts explore enabling lay users to provide AI fairness-related feedback, but there is still a lack of understanding of how to integrate users' feedback into an AI model and what the impacts of doing so are. To bridge this gap, we collected feedback from 58 lay users on the fairness of an XGBoost model trained on the Home Credit dataset, and conducted offline experiments to investigate the effects of retraining models on accuracy and on individual and group fairness. Our work contributes baseline results of integrating user fairness feedback in XGBoost, and a dataset and code framework to bootstrap research in engaging stakeholders in AI fairness. Our discussion highlights the challenges of employing user feedback in AI fairness and points the way to a future application area of interactive machine learning.
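
    As a rough, hedged sketch of the kind of offline experiment the abstract mentions, the snippet below trains an XGBoost classifier and reports accuracy alongside one example group fairness measure (demographic parity difference). Synthetic data stands in for the Home Credit dataset, the protected attribute is hypothetical, and the user-feedback integration and retraining step are not reproduced; the paper's exact features and metrics may differ.

```python
# Hedged sketch: synthetic stand-in data, hypothetical protected attribute,
# demographic parity difference as one example group fairness metric.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))              # stand-in for credit features
group = rng.integers(0, 2, size=n)       # hypothetical protected attribute
y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = XGBClassifier(n_estimators=100, max_depth=3)
model.fit(np.column_stack([X, group]), y)
pred = model.predict(np.column_stack([X, group]))

# Group fairness check: gap in positive-prediction rates between groups.
# Retraining after incorporating user feedback would be evaluated by
# re-computing accuracy and this gap on held-out data.
rate_g0 = pred[group == 0].mean()
rate_g1 = pred[group == 1].mean()
print("accuracy:", (pred == y).mean())
print("demographic parity difference:", abs(rate_g1 - rate_g0))
```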

    Investigating privacy perceptions and subjective acceptance of eye tracking on handheld mobile devices

    Although eye tracking brings many benefits to users of mobile devices and developers of mobile applications, it poses significant privacy risks to both the users of mobile devices and the bystanders around them, who are within the front-facing camera's field of view. Recent research demonstrates that tracking an individual's gaze reveals personal and sensitive information. This paper presents an investigation of users' privacy perceptions and subjective acceptance of eye tracking on handheld mobile devices. In a four-phase user study (N=17), participants used a smartphone eye tracking app, were interviewed before and after viewing a video showing the amount of sensitive and personal data that could be derived from eye movements, and had their privacy concerns measured. Our findings 1) show factors that influence users' and bystanders' attitudes toward eye tracking on mobile devices, such as the algorithms' transparency and the developers' credibility, and 2) support designing mechanisms that allow for privacy-aware eye tracking solutions on mobile devices.

    People with long-term conditions sharing personal health data via digital health technologies:a scoping review to inform design

    The use of digital technology amongst people living with a range of long-term health conditions to support self-management has increased dramatically. More recently, digital health technologies to share and exchange personal health data with others have been investigated. Sharing personal health data with others is not without its risks: sharing data creates threats to the privacy and security of personal data and plays a role in trust, adoption and continued use of digital health technology. Our work aims to inform the design of these digital health technologies by investigating the reported intentions of sharing health data with others, the associated user experiences of using these technologies, and the trust, identity, privacy and security (TIPS) considerations for designing digital health technologies that support the trusted sharing of personal health data for the self-management of long-term health conditions. To address these aims, we conducted a scoping review, analysing over 12,000 papers in the area of digital health technologies. We conducted a reflexive thematic analysis of 17 papers that described digital health technologies supporting the sharing of personal health data, and extracted design implications that could enhance the future development of trusted, private and secure digital health technologies.

    The Need for User-centred Assessment of AI Fairness and Correctness

    AI needs to be fair and robust, especially to meet the demands of new regulation. Regular assessments are key, but it is unclear how we can involve stakeholders without a background in AI in these efforts. This position paper provides an overview of the problems in this area, discusses current work, and looks ahead to the future research needed to make headway in user-centred assessment of AI.