Real-time human ambulation, activity, and physiological monitoring: taxonomy of issues, techniques, applications, challenges and limitations
Automated methods of real-time, unobtrusive, human ambulation, activity, and wellness monitoring and data analysis using various algorithmic techniques have been subjects of intense research. The general aim is to devise effective means of addressing the demands of assisted living, rehabilitation, and clinical observation and assessment through sensor-based monitoring. These research studies have resulted in a large body of literature. This paper presents a holistic articulation of the research studies and offers comprehensive insights along four main axes: distribution of existing studies; monitoring device framework and sensor types; data collection, processing and analysis; and applications, limitations and challenges. The aim is to present a systematic and complete study of the literature in the area in order to identify research gaps and prioritize future research directions.
Towards a Practical Pedestrian Distraction Detection Framework using Wearables
Pedestrian safety continues to be a significant concern in urban communities
and pedestrian distraction is emerging as one of the main causes of grave and
fatal accidents involving pedestrians. The advent of sophisticated mobile and
wearable devices, equipped with high-precision on-board sensors capable of
measuring fine-grained user movements and context, provides a tremendous
opportunity for designing effective pedestrian safety systems and applications.
Accurate and efficient recognition of pedestrian distractions in real-time
given the memory, computation and communication limitations of these devices,
however, remains the key technical challenge in the design of such systems.
Earlier research efforts in pedestrian distraction detection using data
available from mobile and wearable devices have primarily focused only on
achieving high detection accuracy, resulting in designs that are either
resource intensive and unsuitable for implementation on mainstream mobile
devices, or computationally slow and not useful for real-time pedestrian safety
applications, or require specialized hardware, making them less likely to be adopted by
most users. In the quest for a pedestrian safety system that achieves a
favorable balance between computational efficiency, detection accuracy, and
energy consumption, this paper makes the following main contributions: (i)
design of a novel complex activity recognition framework which employs motion
data available from users' mobile and wearable devices and a lightweight
frequency matching approach to accurately and efficiently recognize complex
distraction related activities, and (ii) a comprehensive comparative evaluation
of the proposed framework with well-known complex activity recognition
techniques in the literature with the help of data collected from human subject
pedestrians and prototype implementations on commercially-available mobile and
wearable devices.
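The abstract does not detail the lightweight frequency matching approach itself. A minimal sketch of one plausible form is below: match the magnitude spectrum of an incoming motion-sensor window against per-activity spectral templates built from training data. The activity names, window length, and toy signals are all illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def spectrum(window):
    """Unit-norm magnitude spectrum of a 1-D motion-sensor window."""
    mag = np.abs(np.fft.rfft(window - np.mean(window)))
    norm = np.linalg.norm(mag)
    return mag / norm if norm > 0 else mag

def build_templates(labelled_windows):
    """Average the spectra of training windows for each activity label."""
    return {label: np.mean([spectrum(w) for w in windows], axis=0)
            for label, windows in labelled_windows.items()}

def classify(window, templates):
    """Assign the label whose spectral template is closest in Euclidean distance."""
    return min(templates, key=lambda lbl: np.linalg.norm(spectrum(window) - templates[lbl]))

# Toy data: 'walking' is a ~2 Hz oscillation over a 2 s window; 'standing' is low-level noise.
t = np.linspace(0, 2, 100, endpoint=False)
train = {
    "walking": [np.sin(2 * np.pi * 2 * t + p) for p in (0.0, 0.5)],
    "standing": [0.05 * np.random.RandomState(s).randn(100) for s in (0, 1)],
}
templates = build_templates(train)
print(classify(np.sin(2 * np.pi * 2 * t + 1.0), templates))  # → walking
```

Because only per-label template spectra are stored and each classification is one FFT plus a few distance computations, a scheme of this shape stays within the memory and computation budgets of mainstream wearables.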
An Empirical Study Comparing Unobtrusive Physiological Sensors for Stress Detection in Computer Work.
Several unobtrusive sensors have been tested in studies to capture physiological reactions to stress in workplace settings. Lab studies tend to focus on assessing sensors during a specific computer task, while in situ studies tend to offer a generalized view of sensors' efficacy for workplace stress monitoring, without discriminating different tasks. Given the variation in workplace computer activities, this study investigates the efficacy of unobtrusive sensors for stress measurement across a variety of tasks. We present a comparison of five physiological measurements obtained in a lab experiment, where participants completed six different computer tasks, while we measured their stress levels using a chest band (ECG, respiration), a wristband (PPG and EDA), and an emerging thermal imaging method (perinasal perspiration). We found that thermal imaging can detect increased stress for most participants across all tasks, while wrist and chest sensors were less generalizable across tasks and participants. We summarize the costs and benefits of each sensor stream, and show how some computer use scenarios present usability and reliability challenges for stress monitoring with certain physiological sensors. We provide recommendations for researchers and system builders for measuring stress with physiological sensors during workplace computer use.
SensX: About Sensing and Assessment of Complex Human Motion
The great success of wearables and smartphone apps that provide extensive physical workout instructions boosts a whole industry dealing with consumer-oriented sensors and sports equipment. But these opportunities also bring new challenges. The unregulated distribution of instructions for ambitious exercises enables inexperienced users to undertake demanding workouts without professional supervision, which may lead to suboptimal training success or even serious injuries. We believe that automated supervision and real-time feedback during a workout may help to solve these issues. Therefore we
introduce four fundamental steps for complex human motion assessment and
present SensX, a sensor-based architecture for monitoring, recording, and
analyzing complex and multi-dimensional motion chains. We provide the results
of our preliminary study encompassing 8 different body weight exercises, 20
participants, and more than 9,220 recorded exercise repetitions. Furthermore,
insights into SensX's classification capabilities and the impact of specific sensor configurations on the analysis process are given.
Comment: Published in the Proceedings of the 14th IEEE International Conference on Networking, Sensing and Control (ICNSC), May 16-18, 2017, Calabria, Italy. 6 pages, 5 figures.
An Interpretable Machine Vision Approach to Human Activity Recognition using Photoplethysmograph Sensor Data
The current gold standard for human activity recognition (HAR) is based on
the use of cameras. However, the poor scalability of camera systems renders
them impractical in pursuit of the goal of wider adoption of HAR in mobile
computing contexts. Consequently, researchers instead rely on wearable sensors
and in particular inertial sensors. A particularly prevalent wearable is the smartwatch, which, due to its integrated inertial and optical sensing capabilities, holds great potential for realising better HAR in a non-obtrusive
way. This paper seeks to simplify the wearable approach to HAR by determining if the wrist-mounted optical sensor alone, as typically found in a smartwatch or similar device, can be used as a useful source of data for
activity recognition. The approach has the potential to eliminate the need for
the inertial sensing element, which would in turn reduce the cost and complexity of smartwatches and fitness trackers. This could potentially
commoditise the hardware requirements for HAR while retaining the functionality
of both heart rate monitoring and activity capture all from a single optical
sensor. Our approach relies on the adoption of machine vision for activity
recognition based on suitably scaled plots of the optical signals. We take this
approach so as to produce classifications that are easily explainable and
interpretable by non-technical users. More specifically, images of
photoplethysmography signal time series are used to retrain the penultimate
layer of a convolutional neural network which has initially been trained on the
ImageNet database. We then use the 2048 dimensional features from the
penultimate layer as input to a support vector machine. Results from the
experiment yielded an average classification accuracy of 92.3%. This result
outperforms that of an optical and inertial sensor combined (78%) and
illustrates the capability of HAR systems using...
Comment: 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science
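The pipeline this abstract describes (rasterise a PPG window as a plot image, extract penultimate-layer features from an ImageNet-pretrained CNN, classify with an SVM) can be sketched roughly as follows. To keep the sketch self-contained, a fixed random projection stands in for the pretrained CNN's 2048-dimensional embedding; the 32x32 image size and the toy signals are likewise illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in for the 2048-d penultimate-layer embedding of an ImageNet-pretrained
# CNN: a fixed random projection of the flattened plot image plays that role
# here so no pretrained weights are needed.
rng = np.random.RandomState(0)
PROJ = rng.randn(2048, 32 * 32) * 0.01

def make_plot_image(ppg_window):
    """Rasterise a PPG time series into a 32x32 binary 'plot' image."""
    img = np.zeros((32, 32))
    x = np.linspace(0, 31, len(ppg_window)).astype(int)
    lo, hi = ppg_window.min(), ppg_window.max()
    y = ((ppg_window - lo) / (hi - lo + 1e-9) * 31).astype(int)
    img[31 - y, x] = 1.0
    return img

def cnn_features(plot_image):
    """Map a plot image to a 2048-d feature vector (pretrained-CNN stand-in)."""
    return PROJ @ plot_image.ravel()

# Toy data: a 'walking' PPG window carries a strong periodic motion artefact,
# while a 'resting' window varies slowly.
t = np.linspace(0, 4, 256)
windows, labels = [], []
for seed in range(10):
    r = np.random.RandomState(seed)
    windows.append(np.sin(2 * np.pi * 2 * t) + 0.1 * r.randn(256))
    labels.append("walking")
    windows.append(np.sin(2 * np.pi * 0.3 * t) + 0.1 * r.randn(256))
    labels.append("resting")

# SVM on the extracted features, as in the paper's final classification stage.
X = np.array([cnn_features(make_plot_image(w)) for w in windows])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.score(X, labels))  # training accuracy on this toy set
```

The interpretability claim rests on the input representation: because the classifier operates on the same signal plots a human would inspect, a misclassified window can be shown to a non-technical user as a picture rather than a feature vector.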
Recognition of elementary upper limb movements in an activity of daily living using data from wrist mounted accelerometers
In this paper we present a methodology as a proof of concept for recognizing fundamental movements of the human arm (extension, flexion and rotation of the forearm) involved in ‘making-a-cup-of-tea’, typical of an activity of daily living (ADL). The movements are initially performed in a controlled environment as part of a training phase and the data are grouped into three clusters using k-means clustering. Movements performed during ADL, forming part of the testing phase, are associated with each cluster label using a minimum distance classifier in a multi-dimensional feature space, comprising features selected from a ranked set of 30 features, using Euclidean and Mahalanobis distance as the metric. Experiments were performed with four healthy subjects and our results show that the proposed methodology can detect the three movements with an overall average accuracy of 88% across all subjects and arm movement types using the Euclidean distance classifier.
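The train/test pipeline described above, k-means clustering of controlled-environment movements followed by minimum-distance classification against the learned centroids, can be sketched as below. The 2-D feature values and the cluster-to-movement label mapping are illustrative assumptions; the paper uses features drawn from a ranked set of 30, which is not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical 2-D feature vectors standing in for the paper's selected
# wrist-accelerometer features; three well-separated groups, one per movement.
rng = np.random.RandomState(42)
train = np.vstack([rng.randn(20, 2) * 0.3 + c for c in ([0, 0], [4, 0], [0, 4])])

# Training phase: k-means groups the movements into three clusters.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(train)

# Each cluster is then tagged with a movement label; this mapping is
# illustrative, since k-means cluster numbering is arbitrary.
names = {0: "extension", 1: "flexion", 2: "rotation"}

def classify(feat):
    """Minimum-Euclidean-distance classifier over the learned cluster centroids."""
    d = np.linalg.norm(km.cluster_centers_ - np.asarray(feat), axis=1)
    return names[int(np.argmin(d))]

print(classify([4.1, -0.2]))  # labelled like other points near that cluster
```

Swapping the Euclidean metric for Mahalanobis distance, as the paper also evaluates, would additionally require each cluster's covariance matrix alongside its centroid.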