
    Learning Action Maps of Large Environments via First-Person Vision

    When people observe and interact with physical spaces, they are able to associate functionality with regions in the environment. Our goal is to automate dense functional understanding of large spaces by leveraging sparse activity demonstrations recorded from an ego-centric viewpoint. The method we describe enables functionality estimation both in large scenes where people have behaved and in novel scenes where no behaviors are observed. Our method learns and predicts "Action Maps", which encode the ability of a user to perform activities at various locations. By using an egocentric camera to observe human activities, our method scales with the size of the scene without the need to mount multiple static surveillance cameras, and it is well-suited to the task of observing activities up-close. We demonstrate that by capturing appearance-based attributes of the environment and associating these attributes with activity demonstrations, our proposed mathematical framework allows for the prediction of Action Maps in new environments. Additionally, we offer a preliminary glimpse of the applicability of Action Maps by demonstrating a proof-of-concept application in which they are used in concert with activity detections to perform localization. Comment: To appear at CVPR 201
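The abstract above describes associating appearance-based attributes of scene locations with sparse activity demonstrations to predict where activities can be performed. As a rough illustration of that idea only (not the paper's actual mathematical framework), a minimal sketch might train a per-location classifier from appearance features to activity feasibility and then evaluate it on a novel scene; all names and data below are hypothetical:

```python
import numpy as np

# Illustrative stand-in for "Action Map" prediction: a plain logistic
# regression from appearance features of scene locations to the
# probability that an activity can be performed there, trained from
# sparse (synthetic) demonstrations.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_action_model(X, y, lr=0.1, epochs=500):
    """Batch gradient descent; X is (n, d) features, y is (n,) 0/1 labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

# Synthetic training scene: feature 0 is high where the activity
# was demonstrated (e.g. a "sittable" appearance attribute).
X_train = rng.normal(size=(200, 3))
y_train = (X_train[:, 0] > 0).astype(float)

w, b = train_action_model(X_train, y_train)

# Predict an "Action Map" over two locations of a novel scene:
# one with the attribute present, one without.
X_novel = np.array([[2.0, 0.0, 0.0], [-2.0, 0.0, 0.0]])
action_map = sigmoid(X_novel @ w + b)
```

The transfer to novel scenes works here only because the classifier operates on appearance features rather than on scene identity, which is the property the abstract relies on.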

    Unsupervised monitoring of an elderly person's activities of daily living using Kinect sensors and a power meter

    The need for greater independence amongst the growing population of elderly people has made the concept of “ageing in place” an important area of research. Remote home monitoring strategies help the elderly deal with challenges involved in ageing in place and performing the activities of daily living (ADLs) independently. These monitoring approaches typically involve the use of several sensors, attached to the environment or person, in order to acquire data about the ADLs of the occupant being monitored. Some key drawbacks associated with many of the ADL monitoring approaches proposed for the elderly living alone need to be addressed. These include the need to label a training dataset of activities, use wearable devices or equip the house with many sensors. These approaches are also unable to concurrently monitor physical ADLs to detect emergency situations, such as falls, and instrumental ADLs to detect deviations from the daily routine. These are all indicative of deteriorating health in the elderly. To address these drawbacks, this research aimed to investigate the feasibility of unsupervised monitoring of both physical and instrumental ADLs of elderly people living alone via inexpensive minimally intrusive sensors. A hybrid framework was presented which combined two approaches for monitoring an elderly occupant’s physical and instrumental ADLs. Both approaches were trained based on unlabelled sensor data from the occupant’s normal behaviours. The data related to physical ADLs were captured from Kinect sensors and those related to instrumental ADLs were obtained using a combination of Kinect sensors and a power meter. Kinect sensors were employed in functional areas of the monitored environment to capture the occupant’s locations and 3D structures of their physical activities. The power meter measured the power consumption of home electrical appliances (HEAs) from the electricity panel. 
A novel unsupervised fuzzy approach was presented to monitor physical ADLs based on depth maps obtained from Kinect sensors. Epochs of activities associated with each monitored location were automatically identified, and the occupant’s behaviour patterns during each epoch were represented through the combinations of fuzzy attributes. A novel membership function generation technique was presented to elicit membership functions for attributes by analysing the data distribution of attributes while excluding noise and outliers in the data. The occupant’s behaviour patterns during each epoch of activity were then classified into frequent and infrequent categories using a data mining technique. Fuzzy rules were learned to model frequent behaviour patterns. An alarm was raised when the occupant’s behaviour in new data was recognised as frequent with a longer than usual duration or infrequent with a duration exceeding a data-driven value. Another novel unsupervised fuzzy approach to monitor instrumental ADLs took unlabelled training data from Kinect sensors and a power meter to model the key features of instrumental ADLs. Instrumental ADLs in the training dataset were identified based on associating the occupant’s locations with specific power signatures on the power line. A set of fuzzy rules was then developed to model the frequency and regularity of the instrumental activities tailored to the occupant. This set was subsequently used to monitor new data and to generate reports on deviations from normal behaviour patterns. As a proof of concept, the proposed monitoring approaches were evaluated using a dataset collected from a real-life setting. An evaluation of the results verified the high accuracy of the proposed technique to identify the epochs of activities over alternative techniques. The approach adopted for monitoring physical ADLs was found to improve elderly monitoring. 
It generated fuzzy rules that could represent the person’s physical ADLs and exclude noise and outliers in the data more efficiently than alternative approaches. The performance of different membership function generation techniques was compared, and the fuzzy rule set obtained from the proposed technique could accurately classify more scenarios of normal and abnormal behaviour. The approach for monitoring instrumental ADLs was also found to reliably distinguish power signatures generated automatically by self-regulating devices from those generated as a result of an elderly person’s instrumental ADLs. The evaluations also showed the effectiveness of the approach in correctly identifying elderly people’s interactions with specific HEAs and in tracking simulated upward and downward deviations from normal behaviours. The fuzzy inference system in this approach was found to be robust with regard to errors when identifying instrumental ADLs, as it could effectively classify normal and abnormal behaviour patterns despite errors in the list of the HEAs used.
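The thesis abstract describes eliciting membership functions from the data distribution of attributes while excluding noise and outliers, then flagging behaviours whose duration falls outside the normal range. A minimal sketch of that general idea (not the thesis's exact technique; the trimming percentiles and the triangular shape are assumptions) might look like this:

```python
import numpy as np

# Hedged sketch: derive a triangular membership function for a "usual
# duration" attribute from unlabelled data, trimming outliers with
# percentiles so isolated extreme values do not stretch the function,
# then flag low-membership observations as deviations.

def triangular_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    x = np.asarray(x, dtype=float)
    left = np.clip((x - a) / (b - a), 0.0, 1.0)
    right = np.clip((c - x) / (c - b), 0.0, 1.0)
    return np.minimum(left, right)

def fit_mf_from_data(samples, lo_pct=5, hi_pct=95):
    """Feet at trimmed percentiles, peak at the median, so the
    membership function reflects the bulk of the data, not outliers."""
    a = np.percentile(samples, lo_pct)
    c = np.percentile(samples, hi_pct)
    b = np.median(samples)
    return a, b, c

rng = np.random.default_rng(1)
# Durations (minutes) of a daily activity, with two outliers appended.
durations = np.concatenate([rng.normal(30, 3, 200), [90.0, 2.0]])

a, b, c = fit_mf_from_data(durations)
# A typical duration gets high membership; an unusually long one does not.
membership = triangular_mf([30.0, 85.0], a, b, c)
```

An alarm rule in the spirit of the abstract would then fire when a new observation's membership in every "normal" fuzzy set falls below a data-driven threshold.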

    Emotions in context: examining pervasive affective sensing systems, applications, and analyses

    Pervasive sensing has opened up new opportunities for measuring our feelings and understanding our behavior by monitoring our affective states while mobile. This review paper surveys pervasive affect sensing by examining three major elements of affective pervasive systems, namely “sensing”, “analysis”, and “application”. Sensing investigates the different sensing modalities used in existing real-time affective applications; Analysis explores different approaches to emotion recognition and visualization based on different types of collected data; and Application investigates the leading areas of affective applications. For each of the three aspects, the paper includes an extensive survey of the literature and finally outlines some of the challenges and future research opportunities of affective sensing in the context of pervasive computing.

    Indoor localisation through object detection within multiple environments utilising a single wearable camera

    The recent growth in the wearable sensor market has stimulated new opportunities within the domain of Ambient Assisted Living, providing unique methods of collecting occupant information. This approach leverages contemporary wearable technology, Google Glass, to facilitate a unique first-person view of the occupant’s immediate environment. Machine vision techniques are employed to determine an occupant’s location via environmental object detection. This method provides additional secondary benefits, such as first-person tracking within the environment and the lack of any required sensor interaction to determine occupant location. Object recognition is performed using the Oriented FAST and Rotated BRIEF (ORB) algorithm with a k-Nearest Neighbour matcher to match the saved key-points of the objects to the scene. To validate the approach, an experimental set-up consisting of three ADL routines, each containing at least ten activities ranging from drinking water to making a meal, was considered. Ground truth was obtained from manually annotated video data, and the approach was previously benchmarked against a common method of indoor localisation that employs dense sensor placement, resulting in a recall, precision, and F-measure of 0.82, 0.96, and 0.88 respectively. This paper goes on to assess the viability of applying the solution to differing environments, both in terms of performance and through a qualitative analysis of the practical aspects of installing such a system within differing environments.
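ORB produces 256-bit binary descriptors that are compared under Hamming distance, and k-nearest-neighbour matching is typically filtered with Lowe's ratio test. In practice this pipeline is usually built with OpenCV (`cv2.ORB_create()` plus `cv2.BFMatcher(cv2.NORM_HAMMING)`); the pure-NumPy sketch below, on synthetic descriptors, just shows the matching logic the abstract refers to:

```python
import numpy as np

# Sketch of the descriptor-matching step only: for each query (scene)
# descriptor, find its two nearest saved key-point descriptors under
# Hamming distance and keep the match only if the best is clearly
# better than the second-best (Lowe's ratio test).

def hamming(a, b):
    """Hamming distance between two uint8 descriptor rows."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def knn_ratio_match(query, train, ratio=0.75):
    """Return (query_idx, train_idx, distance) for matches passing the test."""
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, t), ti) for ti, t in enumerate(train))
        (d1, t1), (d2, _) = dists[0], dists[1]
        if d1 < ratio * d2:
            matches.append((qi, t1, d1))
    return matches

rng = np.random.default_rng(2)
train = rng.integers(0, 256, size=(20, 32), dtype=np.uint8)  # saved key-points
query = train[[3, 7]].copy()   # scene descriptors: copies of two key-points,
query[0, 0] ^= 0b1             # one of them with a single bit flipped

matches = knn_ratio_match(query, train)
```

The ratio test is what makes this usable for localisation: ambiguous descriptors that resemble several stored objects are discarded rather than matched to the wrong room.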

    Augmenting human memory using personal lifelogs

    Memory is a key human faculty supporting life activities, including social interactions, life management and problem solving. Unfortunately, our memory is not perfect. Normal individuals will have occasional memory problems, which can be frustrating, while those with memory impairments can often experience a greatly reduced quality of life. Augmenting memory has the potential to make normal individuals more effective, and to give those with significant memory problems a higher general quality of life. Current technologies now make it possible to automatically capture and store daily life experiences over an extended period, potentially even over a lifetime. This type of data collection, often referred to as a personal life log (PLL), can include data such as continuously captured pictures or videos from a first-person perspective, scanned copies of archival material such as books, electronic documents read or created, and emails and SMS messages sent and received, along with context data such as the time of capture and access and location via GPS sensors. PLLs offer the potential for memory augmentation. Existing work on PLLs has focused on the technologies of data capture and retrieval, but little work has been done to explore how these captured data and retrieval techniques can be applied to actual use by normal people in supporting their memory. In this paper, we explore normal people’s needs for memory augmentation based on the psychology literature on the mechanisms behind memory problems, and discuss the possible functions that PLLs can provide to support these needs. Based on this, we also suggest guidelines for data capture, retrieval needs and computer-based interface design. Finally, we introduce our work-in-progress prototype PLL search system in the iCLIPS project as an example of augmenting human memory with PLLs and computer-based interfaces.

    Human Action Recognition with RGB-D Sensors

    Human action recognition, also known as HAR, is at the foundation of many different applications related to behavioural analysis, surveillance, and safety, and it has thus been a very active research area in recent years. The release of inexpensive RGB-D sensors fostered research in this field because depth data simplify the processing of visual data that could otherwise be difficult using classic RGB devices. Furthermore, the availability of depth data makes it possible to implement solutions that are unobtrusive and privacy-preserving with respect to classic video-based analysis. In this scenario, the aim of this chapter is to review the most salient techniques for HAR based on depth signal processing, providing some details on a specific method based on a temporal pyramid of key poses, evaluated on the well-known MSR Action3D dataset. Cippitelli, Enea; Gambi, Ennio; Spinsante, Susanna
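At a high level, a temporal pyramid of key poses assigns each frame's pose descriptor to its nearest "key pose" (a cluster centre learned from training data) and then builds histograms of key-pose occurrences over progressively finer temporal segments, concatenating them so that coarse temporal order is preserved. The sketch below illustrates that general idea on toy data; the details (descriptor, clustering, pyramid depth) differ from the chapter's specific method:

```python
import numpy as np

# Toy sketch of a temporal pyramid of key poses: frames are quantised
# to their nearest key pose, then normalised key-pose histograms are
# computed over the whole sequence (level 0), its halves (level 1),
# and so on, and concatenated into one action descriptor.

def assign_key_poses(frames, key_poses):
    """Nearest key pose (Euclidean) for each frame descriptor."""
    d = np.linalg.norm(frames[:, None, :] - key_poses[None, :, :], axis=2)
    return d.argmin(axis=1)

def temporal_pyramid_histogram(labels, n_key_poses, levels=2):
    """Concatenate normalised key-pose histograms over pyramid segments."""
    feats = []
    for level in range(levels):
        for seg in np.array_split(labels, 2 ** level):
            h = np.bincount(seg, minlength=n_key_poses).astype(float)
            feats.append(h / max(len(seg), 1))
    return np.concatenate(feats)

# Toy sequence: 2-D "pose descriptors" that start near key pose 0
# (e.g. standing) and end near key pose 1 (e.g. arm raised).
key_poses = np.array([[0.0, 0.0], [10.0, 10.0]])
frames = np.vstack([np.full((4, 2), 0.5), np.full((4, 2), 9.5)])

labels = assign_key_poses(frames, key_poses)
feature = temporal_pyramid_histogram(labels, n_key_poses=2)
```

Level 0 sees both key poses equally, while level 1 separates the first and second halves, which is what lets the descriptor distinguish "raise then lower" from "lower then raise".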

    Experiencing SenseCam: a case study interview exploring seven years living with a wearable camera

    This paper presents the findings from an interview with CG, an individual who has worn an automated camera, the SenseCam, every day for the past seven years. Of interest to the study were the participant’s day-to-day experiences wearing the camera and whether these had changed since he first wore it. The findings outline the effect that wearing the camera has had on his self-identity, relationships and interactions with people in public. Issues relating to the capture, transfer and retrieval of lifelog images are also identified. These experiences inform us about the long-term effects of digital life capture and how lifelogging could progress in the future.