
    A wearable and non-wearable approach for gesture recognition: initial results

    Gestures are a natural way of communication between humans. Through this type of non-verbal communication, human interaction may change, since it is possible to send a particular message or capture the attention of another peer. In human-computer interaction, the capture of such gestures has been a topic of interest, where the goal is to classify human gestures in different scenarios. By applying machine learning techniques, one may be able to track and recognize human gestures and use the gathered information to assess the medical condition of a person regarding, for example, motor impairments. Depending on the type of movement and on the target population, one may use different wearable or non-wearable sensors. In this work, we use a hybrid approach for automatically detecting the ball-throwing movement, applying a Microsoft Kinect (non-wearable) and the Pandlet (a set of wearable sensors such as an accelerometer and a gyroscope, among others). After creating a dataset of 10 participants, an SVM model with a DTW kernel is trained and used as a classification tool. The system performance was quantified in terms of the confusion matrix, accuracy, sensitivity and specificity, Area Under the Curve, and Matthews Correlation Coefficient metrics. The obtained results indicate that the present system is able to recognize the selected throwing gestures and that the overall performance of the Kinect is better than that of the Pandlet. This article is a result of the project Deus Ex Machina: NORTE-01-0145-FEDER-000026, supported by Norte Portugal Regional Operational Program (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, through the European Regional Development Fund (ERDF).
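The classification pipeline described above (an SVM over DTW-compared gesture sequences) can be sketched roughly as follows. The Gaussian-of-DTW kernel, the `gamma` value, and the toy "throw"/"rest" signals are illustrative assumptions, not the paper's actual data or configuration; a Gaussian kernel on DTW distances is a common practical choice even though it is not guaranteed to be positive semi-definite.

```python
import numpy as np
from sklearn.svm import SVC

def dtw_distance(a, b):
    # classic dynamic-programming DTW between two 1-D sequences
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_kernel(X, Y, gamma=0.1):
    # Gaussian kernel on DTW distances (illustrative choice, not
    # necessarily the kernel used by the authors)
    K = np.zeros((len(X), len(Y)))
    for i, a in enumerate(X):
        for j, b in enumerate(Y):
            K[i, j] = np.exp(-gamma * dtw_distance(a, b))
    return K

# toy data: a "throw" is a rising ramp, a "rest" is flat noise
rng = np.random.default_rng(0)
throws = [np.linspace(0, 1, 30) + 0.05 * rng.standard_normal(30) for _ in range(5)]
rests = [0.05 * rng.standard_normal(30) for _ in range(5)]
X_train, y_train = throws + rests, [1] * 5 + [0] * 5

clf = SVC(kernel="precomputed")
clf.fit(dtw_kernel(X_train, X_train), y_train)

# classify a new clean ramp against the training set
pred = clf.predict(dtw_kernel([np.linspace(0, 1, 30)], X_train))
```

Because the kernel is precomputed, prediction requires the DTW distance from each test sequence to every training sequence, which is the main computational cost of this approach.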

    Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition

    Human activity recognition (HAR) tasks have traditionally been solved using engineered features obtained by heuristic processes. Current research suggests that deep convolutional neural networks are suited to automate feature extraction from raw sensor inputs. However, human activities are made of complex sequences of motor movements, and capturing these temporal dynamics is fundamental for successful HAR. Based on the recent success of recurrent neural networks for time-series domains, we propose a generic deep framework for activity recognition based on convolutional and LSTM recurrent units, which: (i) is suitable for multimodal wearable sensors; (ii) can perform sensor fusion naturally; (iii) does not require expert knowledge in designing features; and (iv) explicitly models the temporal dynamics of feature activations. We evaluate our framework on two datasets, one of which has been used in a public activity recognition challenge. Our results show that our framework outperforms competing deep non-recurrent networks on the challenge dataset by 4% on average, outperforming some of the previously reported results by up to 9%. Our results show that the framework can be applied to homogeneous sensor modalities, but can also fuse multimodal sensors to improve performance. We characterise key architectural hyperparameters’ influence on performance to provide insights about their optimisation.
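In architectures of the kind described above, the convolutional stage progressively shortens the temporal axis of each sliding window before the LSTM layers consume it. The sketch below shows that arithmetic; the window length (24 samples), the four convolutional layers, and the kernel length (5) are illustrative values in the style of such setups, not necessarily this paper's exact settings.

```python
def conv1d_out_len(t, kernel, stride=1):
    # output length of a 1-D convolution with no padding
    return (t - kernel) // stride + 1

t = 24                      # sliding-window length in samples (assumed)
for _ in range(4):          # four temporal convolution layers, kernel length 5
    t = conv1d_out_len(t, 5)

print(t)  # number of time steps handed to the recurrent layers
```

Each kernel-5, stride-1 layer removes four time steps (24 → 20 → 16 → 12 → 8), so the LSTM sees a much shorter sequence of learned feature activations than the raw window.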

    A fast and robust hand-driven 3D mouse

    The development of new interaction paradigms requires natural interaction. This means that people should be able to interact with technology using the same models they use to interact in everyday real life, that is, through gestures, expressions, and voice. Following this idea, in this paper we propose a non-intrusive, vision-based tracking system able to capture hand motion and simple hand gestures. The proposed device allows the hand to be used as a "natural" 3D mouse, where the forefinger tip or the palm centre is used to identify a 3D marker and hand gestures can be used to simulate the mouse buttons. The approach is based on a monoscopic tracking algorithm which is computationally fast and robust against noise and cluttered backgrounds. Two image streams are processed in parallel, exploiting multi-core architectures, and their results are combined to obtain a constrained stereoscopic problem. The system has been implemented and thoroughly tested in an experimental environment where the 3D hand mouse has been used to interact with objects in a virtual reality application. We also provide results about the performance of the tracker, which demonstrate the precision and robustness of the proposed system.
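Combining two per-camera tracking results into one 3D marker position, as in the constrained stereoscopic problem mentioned above, is commonly done by triangulating the two viewing rays. The midpoint method below is a generic sketch of that building block, not necessarily the authors' formulation; the ray origins and directions are assumed to come from each camera's calibration and 2D detection.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    # midpoint of the shortest segment between two viewing rays
    # r1(t1) = o1 + t1*d1 and r2(t2) = o2 + t2*d2
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b          # approaches 0 for near-parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# two rays that meet exactly at (1, 2, 3): camera origins plus directions
marker = triangulate_midpoint(
    np.array([0.0, 0.0, 0.0]), np.array([1.0, 2.0, 3.0]),
    np.array([1.0, 0.0, 0.0]), np.array([0.0, 2.0, 3.0]),
)
```

When detection noise makes the rays skew rather than intersecting, the returned midpoint splits the residual error evenly between the two cameras, which is why this method tolerates small per-camera tracking errors.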

    Sensor Sleeve: Sensing Affective Gestures

    We describe the use of textile sensors mounted in a garment sleeve to detect affective gestures. The "Sensor Sleeve" is part of a larger project to explore the role of affect in communications. Pressure-activated, capacitive, and elasto-resistive sensors are investigated and their relative merits reported. An implemented application is outlined in which a cellphone receives messages derived from the sleeve's sensors over a Bluetooth interface and relays the signals as text messages to the user's nominated partner.