5,700 research outputs found
Recognition of elementary arm movements using orientation of a tri-axial accelerometer located near the wrist
In this paper we present a method for recognising three fundamental movements of the human arm (reach and retrieve, lift cup to mouth, rotation of the arm) by determining the orientation of a tri-axial accelerometer located near the wrist. Our objective is to detect the occurrence of such movements performed with the impaired arm of a stroke patient during normal daily activities, as a means to assess their rehabilitation. The method relies on accurately mapping transitions between predefined, standard orientations of the accelerometer to corresponding elementary arm movements. To evaluate the technique, kinematic data were collected from four healthy subjects and four stroke patients as they performed a number of activities involved in a representative activity of daily living, 'making-a-cup-of-tea'. Our experimental results show that the proposed method can independently recognise all three of the elementary upper limb movements investigated, with accuracies in the range 91–99% for healthy subjects and 70–85% for stroke patients.
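The core idea of the abstract above, mapping transitions between quantized accelerometer orientations to elementary movements, can be sketched as follows. The orientation names, gravity vectors, and transition table here are purely illustrative assumptions, not taken from the paper:

```python
import math

# Hypothetical predefined standard orientations, expressed as unit gravity
# vectors in the sensor frame (names and values are illustrative only).
STANDARD_ORIENTATIONS = {
    "palm_down": (0.0, 0.0, 1.0),
    "palm_in":   (0.0, 1.0, 0.0),
    "palm_up":   (0.0, 0.0, -1.0),
}

# Illustrative mapping from orientation transitions to elementary movements.
TRANSITION_TO_MOVEMENT = {
    ("palm_down", "palm_in"): "rotation",
    ("palm_in", "palm_up"):   "lift_cup_to_mouth",
    ("palm_in", "palm_down"): "reach_and_retrieve",
}

def nearest_orientation(ax, ay, az):
    """Quantize a (quasi-static) accelerometer sample to the closest
    predefined orientation, by maximum dot product with the unit
    gravity vectors above."""
    norm = math.sqrt(ax * ax + ay * ay + az * az) or 1.0
    v = (ax / norm, ay / norm, az / norm)
    return max(STANDARD_ORIENTATIONS,
               key=lambda name: sum(a * b for a, b in
                                    zip(v, STANDARD_ORIENTATIONS[name])))

def detect_movements(samples):
    """Walk the sample stream, and emit a movement label whenever the
    quantized orientation changes along a known transition."""
    movements, prev = [], None
    for ax, ay, az in samples:
        cur = nearest_orientation(ax, ay, az)
        if prev is not None and cur != prev:
            move = TRANSITION_TO_MOVEMENT.get((prev, cur))
            if move:
                movements.append(move)
        prev = cur
    return movements
```

In practice the raw signal would be low-pass filtered first so that the gravity component dominates; the sketch assumes that preprocessing has already happened.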
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Due to this interest, an increasing demand for
methods to process these videos, possibly in real-time, is expected. Current
approaches present particular combinations of different image features and
quantitative methods to accomplish specific objectives such as object detection,
activity recognition, user-machine interaction and so on. This paper summarizes
the evolution of the state of the art in First Person Vision video analysis
between 1997 and 2014, highlighting, among others, the most commonly used
features, methods, challenges and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction
SensX: About Sensing and Assessment of Complex Human Motion
The great success of wearables and smartphone apps for provision of extensive
physical workout instructions boosts a whole industry dealing with consumer
oriented sensors and sports equipment. But these opportunities also bring new
challenges. The unregulated distribution of instructions for ambitious
exercises enables inexperienced users to undertake demanding workouts without
professional supervision, which may lead to suboptimal training success or even
serious injuries. We believe that automated supervision and real-time feedback
during a workout may help to solve these issues. Therefore, we
introduce four fundamental steps for complex human motion assessment and
present SensX, a sensor-based architecture for monitoring, recording, and
analyzing complex and multi-dimensional motion chains. We provide the results
of our preliminary study encompassing 8 different body weight exercises, 20
participants, and more than 9,220 recorded exercise repetitions. Furthermore,
insights into SensX's classification capabilities and the impact of specific
sensor configurations on the analysis process are given.
Comment: Published in the Proceedings of the 14th IEEE International Conference on Networking, Sensing and Control (ICNSC), May 16th–18th, 2017, Calabria, Italy. 6 pages, 5 figures
Recognition of elementary upper limb movements in an activity of daily living using data from wrist mounted accelerometers
In this paper we present a methodology, as a proof of concept, for recognizing fundamental movements of the human arm (extension, flexion and rotation of the forearm) involved in ‘making-a-cup-of-tea’, a typical activity of daily living (ADL). The movements are initially performed in a controlled environment as part of a training phase and the data are grouped into three clusters using k-means clustering. Movements performed during the ADL, forming part of the testing phase, are associated with each cluster label using a minimum-distance classifier in a multi-dimensional feature space, comprising features selected from a ranked set of 30 features, with Euclidean and Mahalanobis distance as the metric. Experiments were performed with four healthy subjects and our results show that the proposed methodology can detect the three movements with an overall average accuracy of 88% across all subjects and arm movement types using the Euclidean distance classifier.
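The train-then-classify pipeline described in this abstract can be sketched minimally as below. This is an illustrative simplification, not the paper's implementation: per-class means stand in for the k-means cluster centres, the feature vectors and labels are hypothetical, and only the Euclidean variant is shown (the Mahalanobis variant would additionally weight distances by the inverse feature covariance):

```python
import numpy as np

def train_centroids(features, labels):
    """Training phase (simplified): one centroid per movement class,
    computed as the mean of that class's training feature vectors.
    This stands in for the centres found by k-means in the paper."""
    return {lab: features[labels == lab].mean(axis=0)
            for lab in np.unique(labels)}

def classify(sample, centroids):
    """Testing phase: minimum-distance classifier that assigns the
    label of the nearest centroid under Euclidean distance."""
    return min(centroids,
               key=lambda lab: np.linalg.norm(sample - centroids[lab]))
```

A toy usage: with 2-D training features `[[0,0],[0,1]]` labelled "flexion" and `[[10,10],[10,11]]` labelled "extension", a test sample near `(9, 9)` is assigned "extension". In the paper the feature space would instead comprise the features selected from the ranked set of 30.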
Assessing the State of Self-Supervised Human Activity Recognition using Wearables
The emergence of self-supervised learning in the field of wearables-based
human activity recognition (HAR) has opened up opportunities to tackle the most
pressing challenges in the field, namely to exploit unlabeled data to derive
reliable recognition systems for scenarios where only small amounts of labeled
training samples can be collected. As such, self-supervision, i.e., the
paradigm of 'pretrain-then-finetune', has the potential to become a strong
alternative to the predominant end-to-end training approaches, let alone
hand-crafted features for the classic activity recognition chain. Recently, a
number of contributions have introduced self-supervised learning into the
field of HAR, including multi-task self-supervision, masked reconstruction,
CPC, and SimCLR, to name but a few. With the initial success of
these methods, the time has come for a systematic inventory and analysis of the
potential self-supervised learning has for the field. This paper provides
exactly that. We assess the progress of self-supervised HAR research by
introducing a framework that performs a multi-faceted exploration of model
performance. We organize the framework into three dimensions, each containing
three constituent criteria, such that each dimension captures specific aspects
of performance, including the robustness to differing source and target
conditions, the influence of dataset characteristics, and the feature space
characteristics. We utilize this framework to assess seven state-of-the-art
self-supervised methods for HAR, leading to insights into the properties of
these techniques and establishing their value towards learning representations
for diverse scenarios.
Surveying human habit modeling and mining techniques in smart spaces
A smart space is an environment, mainly equipped with Internet-of-Things (IoT) technologies, able to provide services to humans, helping them to perform daily tasks by monitoring the space and autonomously executing actions, giving suggestions and sending alarms. Approaches suggested in the literature may differ in terms of required facilities, possible applications, the amount of human intervention required, and the ability to support multiple users at the same time while adapting to changing needs. In this paper, we propose a Systematic Literature Review (SLR) that classifies the most influential approaches in the area of smart spaces according to a set of dimensions identified by answering a set of research questions. These dimensions make it possible to choose a specific method or approach according to the available sensors, the amount of labeled data, the need for visual analysis, and the requirements in terms of enactment and decision-making on the environment. Additionally, the paper identifies a set of challenges to be addressed by future research in the field.
Towards Learning Discrete Representations via Self-Supervision for Wearables-Based Human Activity Recognition
Human activity recognition (HAR) in wearable computing is typically based on
direct processing of sensor data. Sensor readings are translated into
representations, either derived through dedicated preprocessing, or integrated
into end-to-end learning. Independent of their origin, for the vast majority of
contemporary HAR, those representations are typically continuous in nature.
That has not always been the case. In the early days of HAR, discretization
approaches were explored, primarily motivated by the desire to minimize
computational requirements, but also with a view to applications beyond mere
recognition, such as activity discovery, fingerprinting, or large-scale
search. Those traditional discretization approaches, however, suffer from
substantial loss in precision and resolution in the resulting representations
with detrimental effects on downstream tasks. Times have changed and in this
paper we propose a return to discretized representations. We adopt and apply
recent advancements in Vector Quantization (VQ) to wearables applications,
which enables us to directly learn a mapping between short spans of sensor data
and a codebook of vectors, resulting in recognition performance that is
generally on par with their contemporary, continuous counterparts - sometimes
surpassing them. Therefore, this work presents a proof-of-concept for
demonstrating how effective discrete representations can be derived, enabling
applications beyond mere activity classification and opening up the field
to advanced tools for the analysis of symbolic sequences, as they are known,
for example, from domains such as natural language processing. Based on an
extensive experimental evaluation on a suite of wearables-based benchmark HAR
tasks, we demonstrate the potential of our learned discretization scheme and
discuss how discretized sensor data analysis can lead to substantial changes in
HAR.
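The discretization step at the heart of this abstract, mapping an encoded sensor window to its nearest codebook vector, can be sketched as below. This is a minimal quantizer only: in the paper the codebook is learned jointly with an encoder (VQ-VAE style), whereas here the codebook is random and the encoder is assumed to have already produced the window embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative codebook of K codewords of dimension D; in a learned VQ
# scheme these vectors would be trained, here they are random placeholders.
K, D = 8, 4
codebook = rng.normal(size=(K, D))

def quantize(z):
    """Map an encoded sensor window z (shape (D,)) to its nearest
    codeword by Euclidean distance, returning the discrete token index
    and the quantized vector."""
    dists = np.linalg.norm(codebook - z, axis=1)
    idx = int(np.argmin(dists))
    return idx, codebook[idx]

def discretize_sequence(encoded_windows):
    """Turn a sequence of encoded sensor windows into a symbolic
    sequence of token indices, suitable for symbolic-sequence tools."""
    return [quantize(z)[0] for z in encoded_windows]
```

The resulting integer token sequences are what make NLP-style symbolic analysis applicable, as the abstract suggests; training the codebook end-to-end (e.g. with a straight-through gradient estimator) is omitted here.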