2,690 research outputs found
A Review of Physical Human Activity Recognition Chain Using Sensors
In the era of the Internet of Medical Things (IoMT), healthcare monitoring has gained a vital role. Improving lifestyle, encouraging healthy behaviours, and reducing chronic diseases are urgently required, yet tracking and monitoring critical cases and conditions of the elderly and patients remains a great challenge. Healthcare services for these people are crucial to achieving a high level of safety. Physical human activity recognition using wearable devices is used to monitor and recognize the activities of the elderly and patients. The main aim of this review is to highlight the human activity recognition chain, which includes sensing technologies, preprocessing and segmentation, feature extraction methods, and classification techniques. Challenges and future trends are also highlighted.
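The recognition chain named in this abstract (sensing, preprocessing and segmentation, feature extraction, classification) can be sketched end to end. This is a minimal illustration on synthetic accelerometer-like traces; the window length, the hand-picked time-domain features, and the SVM classifier are assumptions for demonstration, not a pipeline prescribed by the review.

```python
# Minimal sketch of a human-activity-recognition chain:
# sensing (here: synthetic signals) -> segmentation -> feature
# extraction -> classification. All parameters are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def segment(signal, window=50, step=25):
    """Sliding-window segmentation of a 1-D sensor trace."""
    return np.array([signal[i:i + window]
                     for i in range(0, len(signal) - window + 1, step)])

def extract_features(windows):
    """Simple time-domain features per window: mean, std, min, max."""
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1),
                            windows.min(axis=1), windows.max(axis=1)])

# Synthetic traces: "walking" (oscillatory) vs "resting" (low-variance noise)
t = np.arange(2000) / 50.0
walking = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)
resting = 0.05 * rng.standard_normal(t.size)

X = np.vstack([extract_features(segment(walking)),
               extract_features(segment(resting))])
y = np.array([1] * len(segment(walking)) + [0] * len(segment(resting)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC().fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

In a real wearable deployment the synthetic traces would be replaced by accelerometer or gyroscope streams, and the feature set would typically be far richer (frequency-domain and autocorrelation features, for example).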
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Owing to this interest, an increasing demand for
methods to process these videos, possibly in real time, is expected. Current
approaches combine particular image features with quantitative methods to
accomplish specific objectives such as object detection, activity recognition,
and user-machine interaction. This paper summarizes the evolution of the state
of the art in First Person Vision video analysis between 1997 and 2014,
highlighting, among others, the most commonly used features, methods,
challenges, and opportunities within the field.
Higher order feature extraction and selection for robust human gesture recognition using CSI of COTS Wi-Fi devices
Device-free human gesture recognition (HGR) using commercial off-the-shelf (COTS) Wi-Fi
devices has gained attention with recent advances in wireless technology. HGR recognizes the human
activity performed by capturing the reflections of Wi-Fi signals from moving humans and storing
them as raw channel state information (CSI) traces. Existing work on HGR applies noise reduction
and transformation to pre-process the raw CSI traces. However, these methods fail to capture
the non-Gaussian information in the raw CSI data because they deal with linear signal
representations alone. The proposed higher order statistics-based recognition (HOS-Re) model extracts
higher order statistical (HOS) features from raw CSI traces and selects a robust feature subset for the
recognition task. HOS-Re addresses the limitations of the existing methods by extracting third-order
cumulant features that maximize the recognition accuracy. Subsequently, feature selection methods
derived from information theory construct a robust and highly informative feature subset, which is fed
to a multilevel support vector machine (SVM) classifier to measure performance.
The proposed methodology is validated on the public SignFi database, consisting of 276 gestures
with 8280 gesture instances, of which 5520 are from the laboratory and 2760 from the home
environment, using 10 × 5 cross-validation. HOS-Re achieved an average recognition accuracy of
97.84%, 98.26% and 96.34% for the lab, home, and lab + home environments, respectively. The average
recognition accuracy for 150 sign gestures with 7500 instances, collected from five different users, was
96.23% in the laboratory environment. This work was funded by Taylor's University through its TAYLOR'S PhD SCHOLARSHIP Programme.
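The third-order cumulant feature named in this abstract captures exactly the non-Gaussian information that linear (second-order) methods miss, since all cumulants above order two vanish for Gaussian signals. Below is a hedged sketch of a sample third-order cumulant estimator; the synthetic traces and the choice of lags are illustrative assumptions and do not reproduce the HOS-Re feature extraction itself.

```python
# Sketch: sample third-order cumulant C3(tau1, tau2) = E[x(t) x(t+tau1) x(t+tau2)]
# for a zero-mean signal. For Gaussian data C3 is (asymptotically) zero,
# so a clearly nonzero C3 signals non-Gaussian structure.
import numpy as np

def third_order_cumulant(x, tau1, tau2):
    """Biased sample estimate of the third-order cumulant at lags (tau1, tau2)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                      # cumulants assume a zero-mean signal
    n = len(x) - max(tau1, tau2)
    return np.sum(x[:n] * x[tau1:tau1 + n] * x[tau2:tau2 + n]) / len(x)

rng = np.random.default_rng(1)
gaussian = rng.standard_normal(100_000)          # symmetric: C3 near 0
skewed = rng.exponential(1.0, 100_000) - 1.0     # asymmetric: C3 clearly positive

print(third_order_cumulant(gaussian, 0, 0))  # close to 0
print(third_order_cumulant(skewed, 0, 0))    # close to 2, the third central moment of Exp(1)
```

In a CSI pipeline such cumulant values, evaluated over a grid of lags, would form the raw feature pool from which an information-theoretic selection step keeps the most discriminative subset.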