
    Higher order feature extraction and selection for robust human gesture recognition using CSI of COTS Wi-Fi devices

    Device-free human gesture recognition (HGR) using commercial off-the-shelf (COTS) Wi-Fi devices has gained attention with recent advances in wireless technology. HGR recognizes the human activity performed by capturing the reflections of Wi-Fi signals from moving humans and storing them as raw channel state information (CSI) traces. Existing work on HGR applies noise reduction and transformation to pre-process the raw CSI traces. However, these methods fail to capture the non-Gaussian information in the raw CSI data because they are limited to linear signal representations. The proposed higher order statistics-based recognition (HOS-Re) model extracts higher order statistical (HOS) features from raw CSI traces and selects a robust feature subset for the recognition task. HOS-Re addresses the limitations of the existing methods by extracting third-order cumulant features that maximize the recognition accuracy. Subsequently, feature selection methods derived from information theory construct a robust and highly informative feature subset, which is fed as input to a multilevel support vector machine (SVM) classifier to measure the performance. The proposed methodology is validated using the public SignFi database, consisting of 276 gestures with 8280 gesture instances, of which 5520 are from the laboratory and 2760 from the home environment, using 10 × 5 cross-validation. HOS-Re achieved an average recognition accuracy of 97.84%, 98.26% and 96.34% for the lab, home and lab + home environments, respectively. The average recognition accuracy for 150 sign gestures with 7500 instances, collected from five different users, was 96.23% in the laboratory environment. This work was supported by Taylor's University through its TAYLOR'S PhD SCHOLARSHIP Programme.
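    The third-order cumulant features this abstract describes can be sketched as follows. For a zero-mean process, the third-order cumulant reduces to C3(t1, t2) = E[x(n)·x(n+t1)·x(n+t2)], and it vanishes for Gaussian signals, which is why it captures the non-Gaussian structure mentioned above. The lag range and trace length below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def third_order_cumulant(x, max_lag=3):
    """Estimate third-order cumulants C3(t1, t2) of a zero-mean 1-D signal.

    For a zero-mean process, C3(t1, t2) = E[x(n) x(n+t1) x(n+t2)].
    Gaussian signals have vanishing third-order cumulants, so these
    features capture non-Gaussian information in the signal.
    """
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # enforce zero mean
    n = len(x)
    lags = range(-max_lag, max_lag + 1)
    c3 = np.zeros((len(lags), len(lags)))
    for i, t1 in enumerate(lags):
        for j, t2 in enumerate(lags):
            # average the triple product over all valid sample indices
            prods = [x[k] * x[k + t1] * x[k + t2]
                     for k in range(n)
                     if 0 <= k + t1 < n and 0 <= k + t2 < n]
            c3[i, j] = np.mean(prods)
    return c3

# Feature vector for one (synthetic) CSI trace: the flattened cumulant matrix
features = third_order_cumulant(np.random.randn(256)).ravel()
```

    In a full pipeline of this kind, the flattened cumulant matrix per CSI trace would be pruned by an information-theoretic feature selector before being passed to the SVM.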

    Loss of agency in apraxia

    The feeling of acting voluntarily is a fundamental component of human behavior and social life and is usually accompanied by a sense of agency. However, this ability can be impaired in a number of diseases and disorders. An important example is apraxia, a disturbance traditionally defined as a disorder of voluntary skillful movements that often results from frontal-parietal brain damage. The first part of this article focuses on direct evidence of some core symptoms of apraxia, emphasizing those with connections to agency and free will. The loss of agency in apraxia is reflected in the monitoring of internally driven action, in the perception of specifically self-intended movements and in the neural intention to act. The second part presents an outline of the evidence supporting the functional and anatomical link between apraxia and agency. The available structural and functional results converge to reveal that the frontal-parietal network contributes to the sense of agency and to its impairment in disorders such as apraxia. Current knowledge on the generation of motor intentions and action monitoring could potentially be applied to develop therapeutic strategies for the clinical rehabilitation of voluntary action.

    Convolutional neural networks for hand gesture recognition with off-the-shelf radar sensor

    This thesis explains the research and methodologies applied to solve a Human-Computer Interaction task by means of Machine Learning techniques. The goal is to recognize and classify hand gestures performed by the user, which are acquired with a radar sensor. The signal is then processed and given as input to a convolutional neural network, followed by a fully connected classifier that recognizes the movement being performed.
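    The pipeline described (processed radar signal → convolutional layer → fully connected classifier) can be illustrated with a minimal NumPy forward pass. The input shape, kernel size and number of gesture classes below are assumptions for illustration; the weights are random rather than trained.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D convolution (cross-correlation, as in most DL libraries)."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    z = z - z.max()                       # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical processed radar frame, e.g. a 32 x 32 range-Doppler map
rd_map = np.random.randn(32, 32)
kernel = np.random.randn(3, 3) * 0.1        # one 3x3 filter (random, untrained)
feat = relu(conv2d(rd_map, kernel)).ravel() # conv -> ReLU -> flatten
W = np.random.randn(4, feat.size) * 0.01    # fully connected layer, 4 gesture classes
probs = softmax(W @ feat)                   # class probabilities
pred = int(np.argmax(probs))                # predicted gesture index
```

    A real system would stack several such convolutional layers with learned weights and train end-to-end; this sketch only shows how a frame flows through one convolution into the classifier head.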

    Body swarm interface (BOSI) : controlling robotic swarms using human bio-signals

    Traditionally, robots are controlled using devices like joysticks, keyboards, mice and other similar human computer interface (HCI) devices. Although this approach is effective and practical in some cases, it is restricted to healthy individuals without disabilities, and it also requires the user to master the device before use. It becomes complicated and non-intuitive when multiple robots need to be controlled simultaneously with these traditional devices, as in the case of Human Swarm Interfaces (HSI). This work presents a novel concept of using human bio-signals to control swarms of robots. This concept has two major advantages. Firstly, it gives amputees and people with certain disabilities the ability to control robotic swarms, which has previously not been possible. Secondly, it gives the user a more intuitive interface for controlling swarms of robots by using gestures, thoughts, and eye movement. We measure different bio-signals from the human body, including Electroencephalography (EEG), Electromyography (EMG) and Electrooculography (EOG), using off-the-shelf products. After minimal signal processing, we decode the intended control action using machine learning techniques like Hidden Markov Models (HMM) and K-Nearest Neighbors (K-NN). We employ formation controllers based on distance and displacement to control the shape and motion of the robotic swarm. Classifications of thoughts and gestures are compared against ground truth, and the resulting pipelines are evaluated in both simulations and hardware experiments with swarms of ground robots and aerial vehicles.
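    Of the two decoders mentioned, K-NN is the simpler: a query feature vector is assigned the majority label of its k nearest training examples. The toy EMG features and gesture labels below are invented for illustration and are not from the paper.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify a query feature vector by majority vote of its k nearest neighbours."""
    d = np.linalg.norm(train_X - query, axis=1)     # Euclidean distances
    nearest = np.argsort(d)[:k]                     # indices of the k closest samples
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]                # majority label

# Toy EMG feature vectors (e.g. RMS amplitude per channel) for two gestures
train_X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
train_y = np.array([0, 0, 1, 1])                    # 0 = "open hand", 1 = "fist"
pred = knn_predict(train_X, train_y, np.array([0.85, 0.85]))   # -> 1 ("fist")
```

    In a bio-signal pipeline the training rows would come from a short per-user calibration session, and the predicted label would be mapped to a swarm formation command.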

    Fusion of wearable and contactless sensors for intelligent gesture recognition

    This paper presents a novel approach for fusing datasets from multiple sensors using a hierarchical support vector machine algorithm. The validation of this method was experimentally carried out using an intelligent learning system that combines two different data sources. The sensors are a contactless sensor, a radar that detects the movements of the hands and fingers, and a wearable sensor, a flexible pressure sensor array that measures pressure distribution around the wrist. A hierarchical support vector machine architecture has been developed to effectively fuse data that differ in sampling rate, data format and gesture information between the pressure sensors and the radar. In this respect, the proposed method was compared with the classification results from each of the two sensors independently. Datasets from 15 different participants were collected and analyzed in this work. The results show that the radar on its own provides a mean classification accuracy of 76.7%, while the pressure sensors provide an accuracy of 69.0%. However, fusing the pressure sensor output with the radar output using the proposed hierarchical support vector machine algorithm improves the classification accuracy to 92.5%.
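    The hierarchical idea can be sketched in a few lines: one first-level SVM per sensor produces a decision score in its own feature space, and a second-level SVM classifies the vector of those scores, so sensors with different sampling rates and feature dimensions meet on a common scale. All weights below are illustrative stand-ins for trained models, and the binary decision rule is a simplification of the multi-class case in the paper.

```python
import numpy as np

def linear_svm_decision(x, w, b):
    """Decision score of a (pre-trained) linear SVM: f(x) = w.x + b."""
    return x @ w + b

# Hypothetical pre-trained first-level models, one per sensor
w_radar, b_radar = np.array([1.0, -0.5]), 0.1          # radar feature space (2-D)
w_press, b_press = np.array([0.8, 0.3, -0.2]), -0.05   # pressure-array space (3-D)

def hierarchical_predict(x_radar, x_press, w_fuse, b_fuse):
    """Second-level SVM classifies the vector of first-level decision scores."""
    scores = np.array([linear_svm_decision(x_radar, w_radar, b_radar),
                       linear_svm_decision(x_press, w_press, b_press)])
    return 1 if linear_svm_decision(scores, w_fuse, b_fuse) > 0 else 0

w_fuse, b_fuse = np.array([0.6, 0.4]), 0.0             # learned fusion weights (illustrative)
label = hierarchical_predict(np.array([0.5, 0.2]),
                             np.array([0.3, 0.1, 0.4]), w_fuse, b_fuse)
```

    Because the second level sees only per-sensor scores, adding a third sensor just adds one more score dimension rather than requiring a joint feature space.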

    A Survey of Applications and Human Motion Recognition with Microsoft Kinect

    Microsoft Kinect, a low-cost motion sensing device, enables users to interact with computers or game consoles naturally through gestures and spoken commands without any other peripheral equipment. As such, it has commanded intense interest in research and development on the Kinect technology. In this paper, we present a comprehensive survey of Kinect applications and the latest research and development on motion recognition using data captured by the Kinect sensor. On the applications front, we review the applications of the Kinect technology in a variety of areas, including healthcare, education and performing arts, robotics, sign language recognition, retail services, workplace safety training, as well as 3D reconstruction. On the technology front, we provide an overview of the main features of both versions of the Kinect sensor together with the depth sensing technologies used, and review the literature on human motion recognition techniques used in Kinect applications. We provide a classification of motion recognition techniques to highlight the different approaches used in human motion recognition. Furthermore, we compile a list of publicly available Kinect datasets. These datasets are valuable resources for researchers to investigate better methods for human motion recognition and lower-level computer vision tasks such as segmentation, object detection and human pose estimation.