
    Social Activity Recognition on Continuous RGB-D Video Sequences

    Modern service robots are equipped with one or more sensors, often including RGB-D cameras, to perceive objects and humans in the environment. This paper proposes a new system for recognising human social activities from a continuous stream of RGB-D data. Many works to date have succeeded in recognising activities from clipped videos in datasets, but for robotic applications it is important to move to more realistic scenarios in which such activities are not manually selected. For this reason, it is useful to detect the time intervals in which humans are performing social activities, whose recognition can help trigger human-robot interactions or detect situations of potential danger. The main contributions of this work are a novel system for recognising social activities from continuous RGB-D data, combining temporal segmentation and classification, and a model for learning proximity-based priors of the social activities. A new public dataset with RGB-D videos of social and individual activities is also provided and used to evaluate the proposed solutions. The results show the good performance of the system in recognising social activities from continuous RGB-D data.
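    The abstract does not specify how the proximity-based priors are modelled. As a purely hypothetical sketch, one could learn a Gaussian over the typical interpersonal distance of each social activity and use it as a prior; the activity names and parameters below are illustrative, not the paper's learned values:

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Probability density of a 1-D Gaussian."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def proximity_prior(distance, activity_models):
    """Score each social activity by how well the observed interpersonal
    distance (metres) fits its (mean, std) proximity model.
    `activity_models` maps activity name -> (mu, sigma); all values used
    here are assumptions for illustration."""
    scores = {a: gaussian_pdf(distance, mu, sigma)
              for a, (mu, sigma) in activity_models.items()}
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}
```

    Tracked skeleton positions from the RGB-D stream would supply the pairwise distance, and the normalised scores could then weight a frame-level activity classifier.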

    Human activity recognition from object interaction in domestic scenarios

    This paper presents a real-time approach to recognising human activity based on the interaction between people and objects in domestic settings, specifically in a kitchen. The procedure captures partial images of the area where the activity takes place using a colour camera, and processes them to recognise the objects present and their locations. For object description and recognition, a histogram over the rg chromaticity space is used. Interaction with the objects is classified into four possible actions: unchanged, add, remove, or move. Activities are defined as recipes, where objects play the role of ingredients, tools, or substitutes. Sensed objects and actions are then used to analyse, in real time, the probability of the human activity being performed at a particular moment in a continuous activity sequence.
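    An rg chromaticity histogram normalises each pixel by its overall intensity, which makes the descriptor robust to illumination changes. A minimal sketch of how such a descriptor might be computed (function name and bin count are assumptions, not the paper's implementation):

```python
import numpy as np

def rg_chromaticity_histogram(image, bins=16):
    """2-D histogram over rg chromaticity: r = R/(R+G+B), g = G/(R+G+B).
    `image` is an (H, W, 3) RGB array; returns a normalised (bins, bins) histogram."""
    rgb = image.astype(np.float64)
    s = rgb.sum(axis=2)
    s[s == 0] = 1.0  # avoid division by zero on pure-black pixels
    r = rgb[..., 0] / s
    g = rgb[..., 1] / s
    hist, _, _ = np.histogram2d(r.ravel(), g.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()  # normalise so histograms of different-sized regions compare
```

    Object recognition could then compare a candidate region's histogram to stored object models with any standard histogram similarity (e.g. intersection or chi-squared distance).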

    The role of task-supported language teaching in EFL learner’s writing performance and grammar gains

    Recent research in SLA advocates the use of tasks as a useful class activity, claiming that tasks bring language use in the classroom closer to the way language is used in the real world. Framed within a cognitive approach to task-based language teaching, this study set out to investigate whether task-based activities offer any advantage over the more traditional ones found in the PPP (Presentation-Practice-Production) model. Twenty-eight female pre-intermediate participants studying English at a language school in Urmia, Iran, took part in the study. They participated in ten half-hour sessions of instruction covering four structural points: simple past, simple present, present continuous, and 'There is/There are/How much/How many' structures. The PPP group received its treatment through the conventional approach and the task-based group through task-oriented activities. Quantitative analysis of the post-test (consisting of a grammar recognition test and a writing activity) suggested that participants in the PPP group did significantly better on the grammar recognition section, whereas their counterparts in the task group obtained better scores on the writing section. Further findings and implications are discussed in the paper.

    Continuous human motion recognition with a dynamic range-Doppler trajectory method based on FMCW radar

    Radar-based human motion recognition is crucial for many applications, such as surveillance, search and rescue operations, smart homes, and assisted living. Continuous human motion recognition in a real-life environment, i.e., classifying a sequence of activities transitioning one into another rather than individual activities, is necessary for practical deployment. In this paper, a novel dynamic range-Doppler trajectory (DRDT) method based on a frequency-modulated continuous-wave (FMCW) radar system is proposed to recognize continuous human motions under a variety of conditions emulating a real-life environment. The method separates continuous motions and processes them as single events. First, range-Doppler frames consisting of a series of range-Doppler maps are obtained from the backscattered signals. Next, the DRDT is extracted from these frames to monitor human motions in the time, range, and Doppler domains in real time. Then, a peak-search method is applied to locate and separate each human motion on the DRDT map. Finally, range, Doppler, radar cross-section (RCS), and dispersion features are extracted and combined in a multidomain fusion approach as inputs to a machine learning classifier. This achieves accurate and robust recognition even under varying conditions of distance, view angle, direction, and individual diversity. Extensive experiments show its feasibility and superiority, with an average accuracy of 91.9% on continuous classification.
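    A range-Doppler map from FMCW beat signals is conventionally computed with two FFTs: a fast-time FFT within each chirp resolves range, and a slow-time FFT across chirps resolves Doppler. A minimal sketch of this standard step (array shapes and naming are assumptions, not the paper's implementation):

```python
import numpy as np

def range_doppler_map(beat_frame):
    """Compute one range-Doppler map.

    `beat_frame` is a complex array of shape (num_chirps, samples_per_chirp)
    holding dechirped (beat) samples for one coherent processing interval."""
    # Range FFT along fast time (samples within each chirp)
    range_profiles = np.fft.fft(beat_frame, axis=1)
    # Doppler FFT along slow time (across chirps), shifted so zero
    # Doppler sits in the middle row of the map
    rd = np.fft.fftshift(np.fft.fft(range_profiles, axis=0), axes=0)
    return np.abs(rd)
```

    Stacking such maps over successive frames yields the sequence from which a trajectory (as in the DRDT method) can be tracked over time.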

    Fine-grained Activity Classification In Assembly Based On Multi-visual Modalities

    Assembly activity recognition and prediction help to improve productivity, quality control, and safety measures in smart factories. This study aims to sense, recognize, and predict a worker's continuous fine-grained assembly activities on a manufacturing platform. We propose a two-stage network for workers' fine-grained activity classification that leverages scene-level and temporal-level activity features. The first stage is a feature-awareness block that extracts scene-level features from multiple visual modalities, including red, green, blue (RGB) and hand-skeleton frames. We use transfer learning in the first stage and compare three different pre-trained feature extraction models. We then pass the feature information from the first stage to the second stage to learn the temporal-level features of activities. The second stage consists of Recurrent Neural Network (RNN) layers and a final classifier; we compare the performance of two RNN variants, the Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU). A partial-video-observation method is used for predicting fine-grained activities. In experiments on trimmed activity videos, our model achieves an accuracy of >99% on our dataset and >98% on the public dataset UCF 101, outperforming the state-of-the-art models. The prediction model achieves an accuracy of >97% in predicting activity labels using 50% of the onset activity video information. In experiments on an untrimmed video with continuous assembly activities, we combine our recognition and prediction models and achieve an accuracy of >91% in real time, surpassing the state-of-the-art models for recognition of continuous assembly activities.
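    The second stage's temporal modelling can be illustrated with a single GRU cell consuming per-frame scene-level features. The sketch below is a generic NumPy GRU using one common gating convention, not the paper's trained network; all names and dimensions are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, W, U, b):
    """One GRU step (convention used here: h' = (1 - z) * h + z * n).
    W, U, b are dicts with keys "z" (update), "r" (reset), "n" (candidate)."""
    z = sigmoid(x @ W["z"] + h @ U["z"] + b["z"])
    r = sigmoid(x @ W["r"] + h @ U["r"] + b["r"])
    n = np.tanh(x @ W["n"] + (r * h) @ U["n"] + b["n"])
    return (1.0 - z) * h + z * n

def classify_sequence(scene_features, W, U, b, W_out):
    """Run per-frame scene-level features through the GRU and classify
    the activity from the final hidden state."""
    h = np.zeros(U["z"].shape[0])
    for x in scene_features:
        h = gru_step(x, h, W, U, b)
    return int(np.argmax(h @ W_out))
```

    The partial-observation prediction described in the abstract would correspond to calling the classifier on only the first portion (e.g. 50%) of `scene_features`.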

    Two-Stage Human Activity Recognition Using 2D-ConvNet

    There is a huge requirement for continuous intelligent monitoring systems for human activity recognition in domains such as public places, automated teller machines, and the healthcare sector. The increasing demand for automatic recognition of human activity in these sectors, and the need to reduce the cost of manual surveillance, have motivated the research community towards deep learning techniques, so that smart monitoring systems for the recognition of human activities can be designed and developed. Because of the low cost, high resolution, and easy availability of surveillance cameras, the authors developed a new two-stage intelligent framework for the detection and recognition of human activity types inside the premises. This paper introduces a novel framework to recognize single-limb and multi-limb human activities using a Convolutional Neural Network. In the first phase, single-limb and multi-limb activities are separated. Next, these separated single- and multi-limb activities are recognized using sequence classification. For training and validation of the framework we used the UTKinect-Action dataset, which contains 199 action sequences performed by 10 users. We achieved an overall accuracy of 97.88% in real-time recognition of the activity sequences.
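    In real-time recognition of continuous activity streams, noisy per-frame predictions are commonly smoothed over a sliding window before being reported. A minimal majority-vote sketch (the window size and voting scheme are assumptions for illustration, not this paper's method):

```python
from collections import Counter, deque

def smoothed_stream_labels(frame_labels, window=15):
    """Majority-vote smoothing over a sliding window of per-frame
    predictions; returns one smoothed label per incoming frame."""
    buf = deque(maxlen=window)  # keeps only the most recent `window` labels
    out = []
    for lbl in frame_labels:
        buf.append(lbl)
        out.append(Counter(buf).most_common(1)[0][0])
    return out
```

    The smoothing trades a short reporting delay for stability: a one-frame misclassification in the middle of an activity no longer flips the reported label.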