
    ANGELAH: A Framework for Assisting Elders At Home

    The ever-growing percentage of elderly people in modern societies places welfare systems under considerable stress. Partial and progressive loss of motor, sensorial, and/or cognitive skills renders elders unable to live autonomously, eventually leading to their hospitalization, with significant emotional and economic costs. Ubiquitous computing technologies offer interesting opportunities for in-house safety and autonomy. However, existing systems address in-house safety requirements only partially and typically focus solely on elder monitoring and emergency detection. This paper presents ANGELAH, a middleware-level solution integrating "elder monitoring and emergency detection" solutions with networking solutions. ANGELAH has two main features: i) it enables efficient integration between a variety of sensors and actuators deployed at home for emergency detection, and ii) it provides a solid framework for creating and managing rescue teams composed of individuals willing to promptly assist elders in emergency situations. A prototype of ANGELAH, designed for a case study on helping elders with vision impairments, is developed, and promising results are obtained from both computer simulations and a real-network testbed.
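
    As a purely illustrative aside, the sketch below shows how a middleware of this kind might route sensor-level emergency events to a rescue team paired with a given elder. All class and method names here are invented for this summary and are not part of the ANGELAH framework's actual API.

```python
# Illustrative sketch only: names below are hypothetical, not ANGELAH's real API.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class EmergencyEvent:
    elder_id: str
    kind: str       # e.g. "fall_detected" or "no_motion"
    location: str   # room where the triggering sensor is installed


class RescueTeam:
    """Group of volunteers who agreed to assist a given elder."""

    def __init__(self, members: List[str]):
        self.members = members

    def notify(self, event: EmergencyEvent) -> None:
        # A real deployment would page members over the network;
        # here we simply print the alert.
        for member in self.members:
            print(f"ALERT to {member}: {event.kind} for {event.elder_id} in {event.location}")


class MonitoringMiddleware:
    """Routes sensor-level emergency events to the responsible rescue team."""

    def __init__(self) -> None:
        self._teams: Dict[str, RescueTeam] = {}

    def register_team(self, elder_id: str, team: RescueTeam) -> None:
        self._teams[elder_id] = team

    def on_sensor_event(self, event: EmergencyEvent) -> None:
        team = self._teams.get(event.elder_id)
        if team is not None:
            team.notify(event)


# Usage: a fall detector reports an event and the paired team is alerted.
mw = MonitoringMiddleware()
mw.register_team("elder-42", RescueTeam(["neighbour-1", "relative-2"]))
mw.on_sensor_event(EmergencyEvent("elder-42", "fall_detected", "kitchen"))
```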

    An Advanced Home ElderCare Service

    With the increase in welfare costs across the developed world, there is a need for new technologies that can help reduce this enormous cost and provide quality eldercare services. This paper presents a middleware-level solution that integrates monitoring and emergency detection solutions with networking solutions. The proposed system enables efficient integration between a variety of sensors and actuators deployed at home for emergency detection and provides a framework for creating and managing rescue teams willing to assist elders in emergency situations. A prototype of the proposed system was designed and implemented, and results were obtained from both computer simulations and a real-network testbed. These results show that the proposed system can help overcome some of the current problems and reduce the enormous cost of eldercare services.

    Use of nonintrusive sensor-based information and communication technology for real-world evidence for clinical trials in dementia

    Cognitive function is an important end point of treatments in dementia clinical trials. Measuring cognitive function by standardized tests, however, is biased toward highly constrained environments (such as hospitals) in selected samples. Patient-powered real-world evidence using information and communication technology devices, including environmental and wearable sensors, may help to overcome these limitations. This position paper describes current and novel information and communication technology devices and algorithms to continuously monitor behavior and function in people with prodromal and manifest stages of dementia, and discusses clinical, technological, ethical, regulatory, and user-centered requirements for collecting real-world evidence in future randomized controlled trials. Challenges of data safety, quality, and privacy, as well as regulatory requirements, need to be addressed by future smart sensor technologies. When these requirements are satisfied, these technologies will provide access to truly user-relevant outcomes and broader cohorts of participants than currently sampled in clinical trials.
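
    As a hypothetical illustration of what such sensor-based real-world evidence can look like (not an example from the paper), the snippet below turns a passive motion-sensor log into a per-day activity count, the kind of unobtrusive behavioural metric that could complement standardized cognitive tests.

```python
# Hypothetical illustration: daily activity counts from a motion-sensor log.
from collections import Counter
from datetime import datetime

# Each record: (timestamp, room) emitted whenever a motion sensor fires.
events = [
    ("2023-05-01T07:12:00", "kitchen"),
    ("2023-05-01T07:45:00", "bathroom"),
    ("2023-05-02T09:30:00", "kitchen"),
]

# Count sensor firings per calendar day as a crude activity measure.
daily_counts = Counter(datetime.fromisoformat(ts).date() for ts, _room in events)

for day, count in sorted(daily_counts.items()):
    print(day, count)   # e.g. 2023-05-01 2
```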

    Canine-centered interface design: supporting the work of diabetes alert dogs

    Many people with diabetes live with the continuous threat of hypoglycaemic attacks and the danger of going into a coma. Diabetes alert dogs are trained to detect the onset of an attack before the human handler they are paired with deteriorates, giving the handler time to take action. We investigated requirements for designing an alert system that allows dogs to remotely call for help when their human falls unconscious before being able to react to an alert. Through a multispecies ethnographic approach, we focus on teasing out the requirements for a physical canine user interface, involving dogs, their handlers, and trainers in the design. We discuss tensions between the requirements of the canine and human users, argue the need for increased sensitivity towards the needs of individual dogs that goes beyond breed-specific physical characteristics, and reflect on how we can move from designing for dogs to designing with dogs.

    The Evolution of First Person Vision Methods: A Survey

    The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the first person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Due to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
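
    As a purely illustrative aside (not taken from the survey), the sketch below shows one of the low-level motion cues such pipelines often combine with appearance features: the average dense optical-flow magnitude between consecutive frames, computed here with OpenCV's Farneback method.

```python
# Illustrative sketch: mean optical-flow magnitude as a simple egocentric motion cue.
import cv2          # assumes opencv-python is installed
import numpy as np


def mean_motion(prev_gray: np.ndarray, curr_gray: np.ndarray) -> float:
    """Average optical-flow magnitude between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0
    )
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel flow magnitude
    return float(magnitude.mean())


# Synthetic stand-ins for two consecutive frames; in practice these would come
# from an egocentric video stream (e.g. a head-mounted camera).
frame_a = np.zeros((120, 160), dtype=np.uint8)
frame_a[40:80, 40:80] = 255              # bright square on a dark background
frame_b = np.roll(frame_a, 4, axis=1)    # same square shifted 4 px to the right

print("mean flow magnitude:", mean_motion(frame_a, frame_b))
```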

    Eyewear Computing – Augmenting the Human with Head-Mounted Wearable Assistants

    The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with a diverse background, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled "Eyewear Computers for Human-Computer Interaction", as well as 5) two follow-up workshops on "Egocentric Perception, Interaction, and Computing" at the European Conference on Computer Vision (ECCV) and "Eyewear Computing" at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).