3,744 research outputs found

    User recognition in AAL environments

    Springer - Series Advances in Intelligent and Soft Computing, vol. 72. Healthcare projects that aim to decrease the economic and social costs of the ageing-population phenomenon, by de-localising the delivery and management of healthcare services to the home, have been arising in the scientific community. The VirtualECare project is one of these so-called Ambient Assisted Living environments, which we have taken a step forward by introducing proactive techniques for better adapting to its users, namely elderly or chronic patients, since it is able to learn from their interaction based on contexts. This learning, however, requires the system to know with whom it is interacting. Basic detection techniques based on devices that users carry with them (e.g. RFID tags, mobile phones, ...) are not good enough, since users can lose, forget or swap them. To obtain the expected results, the technology used has to be more advanced and available on several platforms. One possible and already fairly developed technique is Facial Recognition, and it appears to be the most appropriate one to handle the problem. This document presents the initial approach of the VirtualECare project to the Facial Recognition area.

    State of the art of audio- and video-based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness.
Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals as well as to assess their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL.
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake in real world settings of AAL technologies. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in the AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.

    Human Action Recognition and Monitoring in Ambient Assisted Living Environments

    Population ageing is set to become one of the most significant challenges of the 21st century, with implications for almost all sectors of society. Especially in developed countries, governments should immediately implement policies and solutions to facilitate the needs of an increasingly older population. Ambient Intelligence (AmI), and in particular the area of Ambient Assisted Living (AAL), offers a feasible response, allowing the creation of human-centric smart environments that are sensitive and responsive to the needs and behaviours of the user. In such a scenario, understanding what a human being is doing, whether and how he/she is interacting with specific objects, or whether abnormal situations are occurring is critical. This thesis is focused on two related research areas of AAL: the development of innovative vision-based techniques for human action recognition and the remote monitoring of users' behaviour in smart environments. The former topic is addressed through different approaches based on data extracted from RGB-D sensors. A first algorithm exploiting skeleton joint orientations is proposed. This approach is extended through a multi-modal strategy that includes the RGB channel to define a number of temporal images, capable of describing the time evolution of actions. Finally, the concept of template co-updating for action recognition is introduced: exploiting different data categories (e.g., skeleton and RGB information) improves the effectiveness of template updating through co-updating techniques. The action recognition algorithms have been evaluated on CAD-60 and CAD-120, achieving results comparable with the state-of-the-art. Moreover, due to the lack of datasets including skeleton joint orientations, a new benchmark named Office Activity Dataset has been internally acquired and released.
Regarding the second topic, the goal is to provide a detailed implementation strategy for a generic Internet of Things monitoring platform that could be used for checking users' behaviour in AmI/AAL contexts.
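As an illustration of the skeleton-based approach summarised above, here is a minimal sketch (not the thesis's actual algorithm; the bone list, joint names and coordinates are invented for the example) of deriving an orientation feature from 3-D joint positions delivered by an RGB-D skeleton tracker, and comparing two frames with it:

```python
import math

# Hypothetical two-bone arm; joint names and coordinates are invented.
BONES = [("shoulder", "elbow"), ("elbow", "wrist")]

def bone_orientations(joints):
    """Unit direction vector of each bone: a simple orientation feature."""
    feats = []
    for parent, child in BONES:
        d = [c - p for p, c in zip(joints[parent], joints[child])]
        norm = math.sqrt(sum(v * v for v in d))
        feats.append(tuple(v / norm for v in d))
    return feats

def similarity(f1, f2):
    """Mean cosine similarity between two sets of bone orientations."""
    dots = [sum(a * b for a, b in zip(u, v)) for u, v in zip(f1, f2)]
    return sum(dots) / len(dots)

# Two frames of the same pose, merely translated: orientations match.
frame_a = {"shoulder": (0.0, 1.4, 0.0), "elbow": (0.3, 1.4, 0.0), "wrist": (0.3, 1.7, 0.0)}
frame_b = {"shoulder": (0.5, 1.4, 0.0), "elbow": (0.8, 1.4, 0.0), "wrist": (0.8, 1.7, 0.0)}
print(round(similarity(bone_orientations(frame_a), bone_orientations(frame_b)), 6))  # 1.0
```

Because the feature is built from bone directions rather than absolute positions, it is invariant to where the person stands in the room, which is one reason orientation-based descriptors suit action recognition.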

    Robotic ubiquitous cognitive ecology for smart homes

    Robotic ecologies are networks of heterogeneous robotic devices pervasively embedded in everyday environments, where they cooperate to perform complex tasks. While their potential makes them increasingly popular, one fundamental problem is how to make them both autonomous and adaptive, so as to reduce the amount of preparation, pre-programming and human supervision that they require in real world applications. The project RUBICON develops learning solutions which yield cheaper, adaptive and efficient coordination of robotic ecologies. The approach we pursue builds upon a unique combination of methods from cognitive robotics, machine learning, planning and agent-based control, and wireless sensor networks. This paper illustrates the innovations advanced by RUBICON in each of these fronts before describing how the resulting techniques have been integrated and applied to a smart home scenario. The resulting system is able to provide useful services and pro-actively assist the users in their activities. RUBICON learns through an incremental and progressive approach driven by the feedback received from its own activities and from the user, while also self-organizing the manner in which it uses available sensors, actuators and other functional components in the process. This paper summarises some of the lessons learned by adopting such an approach and outlines promising directions for future work.

    Probability and Common-Sense: Tandem Towards Robust Robotic Object Recognition in Ambient Assisted Living

    The suitable operation of mobile robots when providing Ambient Assisted Living (AAL) services calls for robust object recognition capabilities. Probabilistic Graphical Models (PGMs) have become the de-facto choice in recognition systems aiming to efficiently exploit contextual relations among objects, also dealing with the uncertainty inherent to the robot workspace. However, these models can perform in an incoherent way when operating in a long-term fashion out of the laboratory, e.g. while recognizing objects in peculiar configurations or belonging to new types. In this work we propose a recognition system that resorts to PGMs and common-sense knowledge, represented in the form of an ontology, to detect those inconsistencies and learn from them. The utilization of the ontology carries additional advantages, e.g. the possibility to verbalize the robot's knowledge. A primary demonstration of the system capabilities has been carried out with very promising results. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech.
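The idea of cross-checking a probabilistic recogniser against common-sense knowledge might be sketched roughly as follows. This is a toy stand-in, not the paper's implementation: the ontology here is a hand-written dictionary of plausible rooms per object class, and `check_consistency`, the class names and room labels are all hypothetical.

```python
# Toy stand-in for an ontology: allowed rooms per object class (invented).
ONTOLOGY = {
    "mug": {"kitchen", "living_room"},
    "toothbrush": {"bathroom"},
    "pillow": {"bedroom", "living_room"},
}

def check_consistency(posterior, room):
    """Cross-check the recogniser's top hypothesis against common sense.

    posterior: dict mapping object class -> probability (e.g. from a PGM).
    room: context in which the object was observed.
    Returns (top_label, consistent); an inconsistent result would be
    flagged for re-examination and used to refine the models.
    """
    label = max(posterior, key=posterior.get)
    return label, room in ONTOLOGY.get(label, set())

# A 'toothbrush' detected in the kitchen gets flagged as incoherent.
print(check_consistency({"toothbrush": 0.6, "mug": 0.4}, "kitchen"))  # ('toothbrush', False)
```

A real system would reason over an OWL-style ontology and feed the detected inconsistencies back into learning, but the control flow is the same: probabilistic inference first, symbolic sanity check second.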

    Interoperable services based on activity monitoring in ambient assisted living environments

    Ambient Assisted Living (AAL) is considered the main technological solution that will enable the aged and people in recovery to maintain their independence, and a consequent high quality of life, for a longer period of time than would otherwise be the case. This goal is achieved by monitoring human activities and deploying the appropriate collection of services to set environmental features and satisfy user preferences in a given context. However, both human monitoring and service deployment are particularly hard to accomplish due to the uncertainty and ambiguity characterising human actions, and the heterogeneity of the hardware devices composing an AAL system. This research addresses both of the aforementioned challenges by introducing 1) an innovative system, based on a Self Organising Feature Map (SOFM), for automatically classifying the resting location of a moving object in an indoor environment and 2) a strategy able to generate context-aware Fuzzy Markup Language (FML) services in order to maximize the users’ comfort and the hardware interoperability level. The overall system runs on a distributed embedded platform with a specialised ceiling-mounted video sensor for intelligent activity monitoring. The system has the ability to learn resting locations, to measure overall activity levels, to detect specific events such as potential falls, and to deploy the right sequence of fuzzy services, modelled through FML, for supporting people in that particular context. Experimental results show less than 20% classification error in monitoring human activities and providing the right set of services, demonstrating the robustness of our approach over others in the literature, with minimal power consumption.
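The SOFM-based classification of resting locations can be sketched as below. This is a minimal 1-D self-organising map in plain Python; the unit count, learning rate, neighbourhood schedule, deterministic initialisation and sample coordinates are all illustrative assumptions, not the paper's configuration.

```python
import math

def train_sofm(points, n_units=3, epochs=50, lr=0.5):
    """Train a tiny 1-D Self-Organising Feature Map on 2-D positions.

    The best-matching unit (BMU) and its lattice neighbours are pulled
    towards each sample, so units settle on frequently occupied spots.
    """
    units = [[float(i), float(i)] for i in range(n_units)]  # deterministic init (illustrative)
    for epoch in range(epochs):
        radius = max(1.0 * (1 - epoch / epochs), 0.01)  # shrinking neighbourhood
        for x, y in points:
            bmu = min(range(n_units),
                      key=lambda i: (units[i][0] - x) ** 2 + (units[i][1] - y) ** 2)
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))  # lattice neighbourhood weight
                units[i][0] += lr * h * (x - units[i][0])
                units[i][1] += lr * h * (y - units[i][1])
    return units

def classify(units, point):
    """Resting-location label = index of the nearest (best-matching) unit."""
    return min(range(len(units)),
               key=lambda i: (units[i][0] - point[0]) ** 2 + (units[i][1] - point[1]) ** 2)

# Positions sampled around two resting spots, e.g. an armchair and a bed.
samples = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9), (5.0, 5.1), (4.9, 5.0), (5.1, 4.9)]
som = train_sofm(samples)
print(classify(som, (1.0, 1.0)), classify(som, (5.0, 5.0)))  # the two spots map to different units
```

Because the map is trained only on observed positions, no labelled floor plan is required: labels such as "armchair" or "bed" can be attached to the learned units afterwards.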

    On the Integration of Adaptive and Interactive Robotic Smart Spaces

    © 2015 Mauro Dragone et al. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License (CC BY-NC-ND 3.0). Enabling robots to seamlessly operate as part of smart spaces is an important and extended challenge for robotics R&D and a key enabler for a range of advanced robotic applications, such as Ambient Assisted Living (AAL) and home automation. The integration of these technologies is currently being pursued from two largely distinct viewpoints. On the one hand, people-centred initiatives focus on improving the user’s acceptance by tackling human-robot interaction (HRI) issues, often adopting a social robotic approach, and by giving the designer and, to a limited degree, the final user(s) control over personalization and product customisation features. On the other hand, technologically-driven initiatives are building impersonal but intelligent systems that are able to pro-actively and autonomously adapt their operations to fit changing requirements and evolving users’ needs, but which largely ignore and do not leverage human-robot interaction and may thus lead to poor user experience and user acceptance. In order to inform the development of a new generation of smart robotic spaces, this paper analyses and compares different research strands with a view to proposing possible integrated solutions with both advanced HRI and online adaptation capabilities.

    Non-Invasive Ambient Intelligence in Real Life: Dealing with Noisy Patterns to Help Older People

    This paper aims to contribute to the field of ambient intelligence from the perspective of real environments, where noise levels in datasets are significant, by showing how machine learning techniques can contribute to knowledge creation by promoting software sensors. The created knowledge can be made actionable through features that help deal with problems related to minimally labelled datasets. A case study is presented and analysed with a view to inferring high-level rules that can help to anticipate abnormal activities, and the potential benefits of integrating these technologies are discussed in this context. The contribution also analyses the usage of the models for knowledge transfer when different sensors with different settings contribute to the noise levels. Finally, based on the authors’ experience, a framework proposal for creating valuable and aggregated knowledge is depicted. This research was partially funded by Fundación Tecnalia Research & Innovation, and J.O.-M. also wants to recognise the support obtained from the EU RFCS program through project number 793505 ‘4.0 Lean system integrating workers and processes (WISEST)’ and from the grant PRX18/00036 given by the Spanish Secretaría de Estado de Universidades, Investigación, Desarrollo e Innovación del Ministerio de Ciencia, Innovación y Universidades.

    Behavior analysis for aging-in-place using similarity heatmaps

    The demand for healthcare services for an increasing population of older adults is faced with a shortage of skilled caregivers and a constant increase in healthcare costs. In addition, the strong preference of the elderly to live independently has been driving much research on "ambient-assisted living" (AAL) systems to support aging-in-place. In this paper, we propose to employ a low-resolution image sensor network for behavior analysis of a home occupant. A network of 10 low-resolution cameras (30x30 pixels) is installed in the service flat of an elderly person, based on which the user's mobility tracks are extracted using a maximum likelihood tracker. We propose a novel measure to find similar patterns of behavior between each pair of days from the user's detected positions, based on heatmaps and Earth mover's distance (EMD). Then, we use an exemplar-based approach to identify sleeping, eating, and sitting activities, and walking patterns of the elderly user for two weeks of real-life recordings. The proposed system achieves an overall accuracy of about 94%.
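The heatmap-comparison idea can be illustrated with a simplified sketch: for equal-mass histograms on a unit-spaced grid, 1-D EMD reduces to the summed absolute difference of the cumulative distributions. The paper applies EMD to full 2-D heatmaps, so the zone labels and occupancy counts below are invented for the example.

```python
def emd_1d(h1, h2):
    """Earth Mover's Distance between two 1-D occupancy histograms."""
    p = [v / sum(h1) for v in h1]  # normalise each histogram to unit mass
    q = [v / sum(h2) for v in h2]
    emd = carry = 0.0
    for a, b in zip(p, q):
        carry += a - b     # running difference of the two CDFs
        emd += abs(carry)  # mass carried across this bin boundary
    return emd

# Daily occupancy over 4 zones (e.g. bed, chair, kitchen, door).
day1 = [8, 1, 1, 0]   # mostly in bed
day2 = [7, 2, 1, 0]   # a similar routine
day3 = [0, 1, 1, 8]   # very different: most time near the door
print(emd_1d(day1, day2) < emd_1d(day1, day3))  # prints True: similar days are closer
```

Unlike a bin-by-bin distance, EMD accounts for how far occupancy mass has shifted, so a small drift of the user's routine to an adjacent zone scores as a small change rather than a completely different day.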