1,221 research outputs found

    Mixed reality participants in smart meeting rooms and smart home environments

    Get PDF
    Human–computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in its multimodal interaction with a smart environment the user displays characteristics that show how the user, not necessarily consciously, verbally and nonverbally provides the smart environment with useful input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture) and human participants in the environment. Therefore it is useful for the profile to contain a physical representation of the user obtained by multi-modal capturing techniques. We discuss the modeling and simulation of interacting participants in a virtual meeting room, we discuss how remote meeting participants can take part in meeting activities, and we offer some observations on translating research results to smart home environments.

    Towards Simulating Humans in Augmented Multi-party Interaction

    Get PDF
    Human-computer interaction requires modeling of the user. A user profile typically contains preferences, interests, characteristics, and interaction behavior. However, in its multimodal interaction with a smart environment the user displays characteristics that show how the user, not necessarily consciously, verbally and nonverbally provides the smart environment with useful input and feedback. Especially in ambient intelligence environments we encounter situations where the environment supports interaction between the environment, smart objects (e.g., mobile robots, smart furniture) and human participants in the environment. Therefore it is useful for the profile to contain a physical representation of the user obtained by multi-modal capturing techniques. We discuss the modeling and simulation of interacting participants in the European AMI research project

    Radar and RGB-depth sensors for fall detection: a review

    Get PDF
    This paper reviews recent works in the literature on the use of systems based on radar and RGB-Depth (RGB-D) sensors for fall detection, and discusses outstanding research challenges and trends related to this research field. Systems that reliably detect fall events and promptly alert carers and first responders have gained significant interest in the past few years in order to address the societal issue of an increasing number of elderly people living alone, with the associated risk of them falling and the consequences in terms of health treatments, reduced well-being, and costs. The interest in radar and RGB-D sensors is related to their capability to enable contactless and non-intrusive monitoring, which is an advantage for practical deployment and users’ acceptance and compliance, compared with other sensor technologies, such as video-cameras, or wearables. Furthermore, the possibility of combining and fusing information from these heterogeneous types of sensors is expected to improve the overall performance of practical fall detection systems. Researchers from different fields can benefit from multidisciplinary knowledge and awareness of the latest developments in radar and RGB-D sensors that this paper discusses.

    Envisioning Value-Rich Design for IoT Wearables

    Get PDF
    The mass-market fashion industry maintains complex economic structures globally. In recent years, the adverse consequences of commercialisation driven by this system have given rise to innovation in production systems, material cultures, and consumer awareness of waste. Alongside issues of the long-term lifespan and ecological impact of wearables (wearable technology), the values and thought processes that shape practices within the clothing sector are under-represented. The integration of emerging wireless technologies in garments heightens this problem. The potential to access, collectively experience, wear, monitor or exploit personal data is only just beginning to be understood. In this paper, the author explores the role value-sensitive design [7] plays in further embedding sustainability into wearables ideation. From value-sensitive design, the Envisioning Cards toolkit [5] is employed to guide speculation in the design case of Aura:maton, an Internet of Things (IoT) connected garment with an olfactory-emitting display. With this in mind, the 'social, economic and aesthetic force' [3] of fashion is leveraged as a living network metaphor, in order to frame everyday experiences of an IoT ecosystem. Exploratory workshops trace how people perceive value-tensions of wirelessly networked garments. The author's evaluations show the potential of Envisioning Cards to connect broader social, cultural, economic or political issues as conceptual design tactics, to avoid blind spots. This paper discusses how designers could intentionally explore value dimensions alongside the technologically possible, as they negotiate material-immaterial conditions during fashion wearables development. Interweaving values into decisions of what gets made, or not made, can potentially shift the unfolding of design toward value-rich, IoT connected garments.

    The Internet of Things Will Thrive by 2025

    Get PDF
    This report is the latest in a sustained research effort throughout 2014 by the Pew Research Center Internet Project to mark the 25th anniversary of the creation of the World Wide Web by Sir Tim Berners-Lee. The report is an analysis of opinions about the likely expansion of the Internet of Things (sometimes called the Cloud of Things), a catchall phrase for the array of devices, appliances, vehicles, wearable material, and sensor-laden parts of the environment that connect to each other and feed data back and forth. It covers the over 1,600 responses offered specifically in answer to our question about where the Internet of Things would stand by the year 2025. The report is the next in a series of eight Pew Research and Elon University analyses to be issued this year in which experts share their expectations about the future of such things as privacy, cybersecurity, and net neutrality. It includes some of the best and most provocative of the predictions survey respondents made when specifically asked to share their views about the evolution of embedded and wearable computing and the Internet of Things.

    Co-creation Model to Design Wearables for Emotional Wellness of Elderly

    Get PDF
    Ways to influence emotions have always been an area of interest within the scientific community. The objective of this research is to find the role of technology in improving emotional wellness for the elderly population. We conducted a qualitative and quantitative study with the help of interviews and a survey. A sample of 24 respondents was selected randomly from the elderly population. The results showed a strong correlation between emotional, psychological and social wellness dimensions and elders' comfort with the use of technology. Based on our study, we present a co-creation model to design wearables for monitoring and improving emotional wellness for the elderly. There is a need for focused efforts to develop digital interventions for emotional wellness for the elderly. It is important to include elders as co-designers, forming effective solutions for the elderly through a co-creation process.

    Reference Resolution in Multi-modal Interaction: Position paper

    Get PDF
    In this position paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity of devoting more research to reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply more than one modality in conveying his or her message to the environment, in which a computer detects and interprets signals from different modalities. We show some naturally arising problems and how they are treated in different contexts. No generally applicable solutions are given.

    Reference resolution in multi-modal interaction: Preliminary observations

    Get PDF
    In this paper we present our research on multimodal interaction in and with virtual environments. The aim of this presentation is to emphasize the necessity of devoting more research to reference resolution in multimodal contexts. In multi-modal interaction the human conversational partner can apply more than one modality in conveying his or her message to the environment, in which a computer detects and interprets signals from different modalities. We show some naturally arising problems but do not give general solutions. Rather, we have decided to perform more detailed research on reference resolution in uni-modal contexts to obtain methods generalizable to multi-modal contexts. Since we aim to build applications for a Dutch audience, and since hardly any research has been done on reference resolution for Dutch, we give results on the resolution of anaphoric and deictic references in Dutch texts. We hope to be able to extend these results to our multimodal contexts later.

    MeciFace: Mechanomyography and Inertial Fusion based Glasses for Edge Real-Time Recognition of Facial and Eating Activities

    Full text link
    The increasing prevalence of stress-related eating behaviors and their impact on overall health highlights the importance of effective monitoring systems. In this paper, we present MeciFace, an innovative wearable technology designed to monitor facial expressions and eating activities in real-time on-the-edge (RTE). MeciFace aims to provide a low-power, privacy-conscious, and highly accurate tool for promoting healthy eating behaviors and stress management. We employ lightweight convolutional neural networks as backbone models for facial expression and eating monitoring scenarios. The MeciFace system ensures efficient data processing with a tiny memory footprint, ranging from 11KB to 19KB. During RTE evaluation, the system achieves impressive performance, yielding an F1-score of 86% for facial expression recognition and 90% for eating/drinking monitoring, even for the RTE of an unseen user. Comment: Submitted to Nature Scientific Reports
