
    Mechanisms, Causes and the Layered Model of the World

    Most philosophical accounts of causation take causal relations to obtain between individuals and events in virtue of nomological relations between properties of these individuals and events. Such views fail to take into account the consequences of the fact that, in general, the properties of individuals and events will depend upon mechanisms that realize those properties. In this paper I attempt to rectify this failure, and in so doing to provide an account of the causal relevance of higher-level properties. I do this by critiquing one prominent model of higher-level properties – Kim’s functional model of reduction – and contrasting it with a mechanistic approach to higher-level properties and causation.

    Behavior analysis for aging-in-place using similarity heatmaps

    The demand for healthcare services for a growing population of older adults is confronted with a shortage of skilled caregivers and constantly rising healthcare costs. In addition, the strong preference of the elderly to live independently has been driving much research on "ambient-assisted living" (AAL) systems to support aging-in-place. In this paper, we propose to employ a low-resolution image sensor network for behavior analysis of a home occupant. A network of 10 low-resolution cameras (30x30 pixels) is installed in the service flat of an elderly resident, from which the user's mobility tracks are extracted using a maximum likelihood tracker. We propose a novel measure to find similar patterns of behavior between each pair of days from the user's detected positions, based on heatmaps and the Earth mover's distance (EMD). We then use an exemplar-based approach to identify sleeping, eating, and sitting activities, as well as walking patterns, of the elderly user over two weeks of real-life recordings. The proposed system achieves an overall accuracy of about 94%.
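The day-pair comparison this abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the 30x30 grid matches the stated sensor resolution, but the per-axis-marginal approximation of the 2-D EMD and the synthetic position data are my own simplifications.

```python
import numpy as np

GRID = 30  # matches the 30x30 camera resolution mentioned in the abstract

def heatmap(positions, grid=GRID):
    """Accumulate (x, y) positions in [0, 1)^2 into a normalized occupancy grid."""
    h, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                             bins=grid, range=[[0, 1], [0, 1]])
    return h / h.sum()

def emd_1d(p, q):
    """EMD between two 1-D histograms on the same bins:
    the L1 distance between their cumulative sums."""
    return np.abs(np.cumsum(p) - np.cumsum(q)).sum()

def day_similarity(day_a, day_b):
    """Approximate the 2-D EMD between two daily heatmaps by summing
    the 1-D EMDs of their x- and y-marginals (a cheap stand-in for
    a full transport solve)."""
    ha, hb = heatmap(day_a), heatmap(day_b)
    return (emd_1d(ha.sum(axis=1), hb.sum(axis=1)) +
            emd_1d(ha.sum(axis=0), hb.sum(axis=0)))

# Synthetic mobility data: two similar days and one very different day.
rng = np.random.default_rng(0)
day1 = rng.uniform(0.2, 0.4, size=(500, 2))
day2 = rng.uniform(0.2, 0.4, size=(500, 2))
day3 = rng.uniform(0.6, 0.9, size=(500, 2))
print(day_similarity(day1, day2) < day_similarity(day1, day3))  # similar days score lower
```

A distance matrix over all day pairs built this way is exactly the "similarity heatmap" the title refers to: small EMD values mark days with matching behavior patterns.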

    Human activity recognition from object interaction in domestic scenarios

    This paper presents a real-time approach to the recognition of human activity based on the interaction between people and objects in domestic settings, specifically in a kitchen. The procedure is based on capturing partial images of where the activity takes place using a colour camera, and processing the images to recognize the objects present and their locations. For object description and recognition, a histogram over the rg chromaticity space has been selected. Interaction with the objects is classified into four possible types of action: unchanged, add, remove, or move. Activities are defined as recipes, in which objects play the role of ingredients, tools, or substitutes. Sensed objects and actions are then used to analyze, in real time, the probability of the human activity being performed at a particular moment in a continuous activity sequence.
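The rg chromaticity descriptor mentioned above can be sketched as follows; a minimal illustration, not the paper's implementation, and the bin count and the brightness-invariance demo are my own choices. rg chromaticity keeps only r = R/(R+G+B) and g = G/(R+G+B), so the descriptor factors out overall intensity, which is useful under changing kitchen lighting.

```python
import numpy as np

def rg_histogram(image, bins=16):
    """Describe an RGB image by a normalized 2-D histogram over rg
    chromaticity, where r = R/(R+G+B) and g = G/(R+G+B)."""
    rgb = image.reshape(-1, 3).astype(float)
    total = rgb.sum(axis=1)
    valid = total > 0                      # skip pure-black pixels (undefined chromaticity)
    r = rgb[valid, 0] / total[valid]
    g = rgb[valid, 1] / total[valid]
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()               # normalize for scale invariance

# Illustration: the descriptor is unchanged when brightness doubles,
# because chromaticity divides out the total intensity.
rng = np.random.default_rng(1)
obj = rng.integers(0, 128, size=(8, 8, 3))
brighter = np.clip(obj * 2, 0, 255)        # same colors, doubled intensity
print(np.allclose(rg_histogram(obj), rg_histogram(brighter)))
```

Comparing such histograms between consecutive frames is one plausible way to detect the four action types: an object histogram that appears is "add", one that disappears is "remove", one that reappears elsewhere is "move".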

    What Can I Do Around Here? Deep Functional Scene Understanding for Cognitive Robots

    For robots that can interact with the physical environment through their end effectors, understanding the surrounding scene is not merely a task of image classification or object recognition. To perform actual tasks, it is critical for the robot to have a functional understanding of the visual scene. Here, we address the problem of localizing and recognizing functional areas in an arbitrary indoor scene, formulated as a two-stage deep-learning-based detection pipeline. A new scene functionality testbed, compiled from two publicly available indoor scene datasets, is used for evaluation. Our method is evaluated quantitatively on the new dataset, demonstrating the ability to perform efficient recognition of functional areas in arbitrary indoor scenes. We also demonstrate that our detection model generalizes to novel indoor scenes by cross-validating it with images from the two different datasets.
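The two-stage structure described above (propose candidate regions, then classify each region's functionality) can be sketched as a pipeline skeleton. Everything here is hypothetical scaffolding: the labels, the fixed region proposals, and the deterministic scoring are stand-ins for the two trained networks, which are not available from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Region:
    x: int
    y: int
    w: int
    h: int

FUNCTIONAL_LABELS = ["sittable", "openable", "placeable"]  # illustrative labels only

def propose_regions(scene):
    """Stage 1 (stand-in): return candidate boxes. A real system would run
    a region-proposal network over the image."""
    h, w = scene["height"], scene["width"]
    return [Region(0, 0, w // 2, h // 2), Region(w // 2, h // 2, w // 2, h // 2)]

def classify_region(scene, region):
    """Stage 2 (stand-in): score each functional label for one region.
    A real system would run a CNN classifier on the cropped region;
    here a deterministic formula keeps the sketch runnable."""
    scores = {lbl: ((region.x + region.y + 13 * i) % 97) / 97
              for i, lbl in enumerate(FUNCTIONAL_LABELS)}
    best = max(scores, key=scores.get)
    return best, scores

def detect_functional_areas(scene, threshold=0.3):
    """Keep only region/label pairs whose best score clears the threshold."""
    detections = []
    for region in propose_regions(scene):
        label, scores = classify_region(scene, region)
        if scores[label] >= threshold:
            detections.append((region, label, scores[label]))
    return detections

scene = {"height": 480, "width": 640}
for region, label, score in detect_functional_areas(scene):
    print(region, label, round(score, 2))
```

The point of the skeleton is the data flow, not the stub scoring: stage 1 narrows the search space so that the (expensive) stage-2 classifier only runs on a handful of candidates per scene.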

    3D Human Activity Recognition with Reconfigurable Convolutional Neural Networks

    Human activity understanding with 3D/depth sensors has received increasing attention in multimedia processing and interaction. This work aims to develop a novel deep model for automatic activity recognition from RGB-D videos. We represent each human activity as an ensemble of cubic-like video segments, and learn to discover the temporal structure of a category of activities, i.e. how the activities are to be decomposed for classification. Our model can be regarded as a structured deep architecture, as it extends convolutional neural networks (CNNs) by incorporating structural alternatives. Specifically, we build the network from 3D convolution and max-pooling operators over the video segments, and introduce latent variables in each convolutional layer that manipulate the activation of neurons. Our model thus advances existing approaches in two respects: (i) it acts directly on the raw inputs (grayscale-depth data) to perform recognition instead of relying on hand-crafted features, and (ii) the model structure can be dynamically adjusted to account for the temporal variation of human activities, i.e. the network configuration is allowed to be partially activated during inference. For model training, we propose an EM-type optimization method that iteratively (i) discovers the latent structure by determining the decomposed actions for each training example, and (ii) learns the network parameters using the back-propagation algorithm. Our approach is validated in challenging scenarios and outperforms state-of-the-art methods. In addition, a large human activity database of RGB-D videos is presented. Comment: this manuscript has 10 pages and 9 figures; a preliminary version was published at the ACM MM'14 conference.
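The EM-type alternation the abstract describes can be illustrated on a toy 1-D analogue. This is a simplified sketch, not the paper's model: a "video" is a 1-D feature sequence, the latent variable is the split point that decomposes one activity into two sub-actions, and refitting per-segment templates stands in for the back-propagation M-step.

```python
import numpy as np

# Toy data: each "video" is two sub-actions (levels ~1 then ~5) with a
# different, unknown split point per example, plus a little noise.
rng = np.random.default_rng(0)
videos = [np.concatenate([np.full(k, 1.0), np.full(10 - k, 5.0)])
          + rng.normal(0, 0.1, 10)
          for k in (3, 4, 5)]

theta = np.array([0.0, 0.0])  # "network parameters": one template per sub-action

for _ in range(10):
    # E-step: for each video, choose the latent split that best fits
    # the current templates (discover the decomposed actions).
    splits = []
    for v in videos:
        errs = [((v[:k] - theta[0]) ** 2).sum() + ((v[k:] - theta[1]) ** 2).sum()
                for k in range(1, len(v))]
        splits.append(1 + int(np.argmin(errs)))
    # M-step: refit the templates to the assigned segments
    # (stand-in for a back-propagation parameter update).
    theta[0] = np.mean(np.concatenate([v[:k] for v, k in zip(videos, splits)]))
    theta[1] = np.mean(np.concatenate([v[k:] for v, k in zip(videos, splits)]))

print(splits, np.round(theta, 1))  # recovers the per-video splits and the two levels
```

The alternation mirrors the abstract exactly: the E-step adapts the structure per example (here the split point; in the paper, which parts of the network are activated), while the M-step updates shared parameters given that structure.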