Towards Using Unlabeled Data in a Sparse-coding Framework for Human Activity Recognition
We propose a sparse-coding framework for activity recognition in ubiquitous
and mobile computing that alleviates two fundamental problems of current
supervised learning approaches. (i) It automatically derives a compact, sparse
and meaningful feature representation of sensor data that does not rely on
prior expert knowledge and generalizes extremely well across domain boundaries.
(ii) It exploits unlabeled sample data for bootstrapping effective activity
recognizers, i.e., substantially reduces the amount of ground truth annotation
required for model estimation. Such unlabeled data is trivial to obtain, e.g.,
through contemporary smartphones carried by users as they go about their
everyday activities.
Based on the self-taught learning paradigm, we automatically derive an
over-complete set of basis vectors from unlabeled data that captures the
inherent patterns present in activity data. Effective feature extraction is
then achieved by projecting raw sensor data onto the feature space defined by
this over-complete basis. Given these learned feature representations,
classification backends are trained using small amounts of labeled training
data.
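To make the pipeline concrete, the following is a minimal sketch of the self-taught feature-learning idea described above, using scikit-learn's DictionaryLearning as the sparse-coding step. The window length, dictionary size, sparsity penalty, and the logistic-regression backend are illustrative assumptions rather than the paper's settings, and the random arrays merely stand in for real sensor recordings.

```python
# Sketch: self-taught sparse-coding features for activity recognition
# (illustrative parameters; placeholder data instead of real sensor streams).
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.linear_model import LogisticRegression

def frame_signal(signal, win=64, step=32):
    """Slice a 1-D sensor stream into fixed-length windows (assumed sizes)."""
    return np.array([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])

# 1) Learn an over-complete dictionary (basis vectors) from *unlabeled* windows.
unlabeled = np.random.randn(20000)            # placeholder for unlabeled accelerometer data
X_unlabeled = frame_signal(unlabeled)
dico = DictionaryLearning(n_components=128,   # > window length -> over-complete
                          transform_algorithm="lasso_lars",
                          transform_alpha=1.0)
dico.fit(X_unlabeled)

# 2) Project labeled windows onto the learned basis to obtain sparse codes.
labeled = np.random.randn(5000)               # placeholder for a small labeled recording
X_codes = dico.transform(frame_signal(labeled))
y = np.random.randint(0, 3, size=len(X_codes))  # placeholder activity labels

# 3) Train a classification backend on the sparse codes using the small labeled set.
clf = LogisticRegression(max_iter=1000).fit(X_codes, y)
```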
We study the new approach in detail using two datasets that differ in terms
of the recognition tasks and sensor modalities. Primarily, we focus on a
transportation mode analysis task, a popular task in mobile-phone-based
sensing. The sparse-coding framework significantly outperforms
state-of-the-art supervised learning approaches. Furthermore, we demonstrate
the great practical potential of the new approach by successfully evaluating
its generalization capabilities across both domains and sensor modalities on
the popular Opportunity dataset. Our feature learning approach outperforms
state-of-the-art approaches to analyzing activities of daily living.
Comment: 18 pages, 12 figures, Pervasive and Mobile Computing, 201
Dealing with the effects of sensor displacement in wearable activity recognition
Most wearable activity recognition systems assume a predefined sensor deployment that remains unchanged during runtime. However, this assumption does not reflect real-life conditions. During the normal use of such systems, users may place the sensors in a position different from the predefined one. Sensors may also move from their original location to a different one due to loose attachment. Activity recognition systems trained on activity patterns characteristic of a given sensor deployment are therefore likely to fail under sensor displacement. In this work, we explore the effects of sensor displacement induced both by intentional misplacement of sensors and by self-placement by the user. The effects of sensor displacement are analyzed for standard activity recognition techniques, as well as for an alternative robust sensor fusion method proposed in previous work. While classical recognition models show little tolerance to sensor displacement, the proposed method is shown to have notable capabilities to assimilate the changes introduced in the sensor position due to self-placement, and it provides considerable improvements for large misplacements.
This work was supported by the High Performance Computing (HPC)-Europa2 project funded by the European Commission-DG Research in the Seventh Framework Programme under grant agreement No. 228398 and by the EU Marie Curie Network iCareNet under grant No. 264738. This work was also supported by the Spanish Comision Interministerial de Ciencia y Tecnologia (CICYT) Project SAF2010-20558, Junta de Andalucia Project P09-TIC-175476, and the FPU Spanish grant AP2009-2244.
Recognition of Crowd Behavior from Mobile Sensors with Pattern Analysis and Graph Clustering Methods
Mobile on-body sensing has distinct advantages for the analysis and
understanding of crowd dynamics: sensing is not geographically restricted to a
specific instrumented area, mobile phones offer on-body sensing and they are
already deployed on a large scale, and the rich sets of sensors they contain
allow one to characterize the behavior of users through pattern recognition
techniques.
In this paper we present a methodological framework for the machine
recognition of crowd behavior from on-body sensors, such as those in mobile
phones. The recognition of crowd behaviors opens the way to the acquisition of
large-scale datasets for the analysis and understanding of crowd dynamics. It
also has practical safety applications by providing improved crowd situational
awareness in cases of emergency.
The framework comprises: behavioral recognition with the user's mobile
device, pairwise analysis of the activity relatedness of two users, and graph
clustering to uncover, globally, which users participate in a given crowd
behavior. We illustrate this framework for the identification of groups of
persons walking, using empirically collected data.
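As a rough illustration of this three-stage pipeline, the sketch below scores pairwise activity relatedness between users, builds a relatedness graph, and clusters it. The Pearson-correlation measure, the 0.7 threshold, and the use of connected components (via networkx) are assumptions made for this example; the paper's own choices may differ.

```python
# Sketch: pairwise relatedness -> graph -> clusters of co-behaving users.
import numpy as np
import networkx as nx

def relatedness(sig_a, sig_b):
    """Relatedness of two users' acceleration-magnitude streams
    (Pearson correlation here; an assumption, not the paper's measure)."""
    return float(np.corrcoef(sig_a, sig_b)[0, 1])

def crowd_groups(signals, threshold=0.7):
    """Link users whose signals are sufficiently related, then return the
    connected components as candidate groups sharing a crowd behavior."""
    g = nx.Graph()
    g.add_nodes_from(range(len(signals)))
    for i in range(len(signals)):
        for j in range(i + 1, len(signals)):
            if relatedness(signals[i], signals[j]) >= threshold:
                g.add_edge(i, j)
    return [sorted(c) for c in nx.connected_components(g)]

# Usage: three synthetic users, two of them sharing a common walking rhythm.
t = np.linspace(0, 10, 500)
users = [np.sin(2 * np.pi * 2.0 * t),
         np.sin(2 * np.pi * 2.0 * t + 0.1),
         np.random.randn(500)]
print(crowd_groups(users))   # e.g. [[0, 1], [2]]
```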
We discuss the challenges and research avenues for theoretical and applied
mathematics arising from the mobile sensing of crowd behaviors.
Worker Activity Recognition in Smart Manufacturing Using IMU and sEMG Signals with Convolutional Neural Networks
Event-Based Activity Tracking in Work
Wearable computers aim to empower people by providing relevant information at the appropriate time. This context-based information delivery helps users perform intricate, tedious, or critical tasks, improves productivity, decreases error rates, and thus reduces labor costs.
Fusion of String-Matched Templates for Continuous Activity Recognition
This paper describes a new method for continuous activity recognition based on fusion of string-matched activity templates. The underlying segmentation and spotting approach is carried out on several symbol streams in parallel. These streams represent motion trajectories of body limbs in Cartesian space, acquired from body-worn inertial sensors. First results of our method in a highly complex real-world application are presented. 8 subjects performed 3800 activity instances of a checking procedure in car assembly, adding up to 480 minutes of recordings. Selecting 6 activity classes with 468 occurrences for first investigations, we obtained an accuracy of up to 87% for the user-dependent case.
…to dynamic time warping ([2], [44]) but avoids costly arithmetic operations. Strings as an abstraction of motion have also been used in [7]. Paper contributions: In previous results ([11]), we showed that string matching is a promising approach to segmentation of motion data. Here we investigate the fusion of several segmentations which are found using string matching on motion trajectories of wrists and elbows. In particular, this paper presents the following contributions to the field of activity recognition: 1. Based on a recent motion segmentation approach, we present a way to combine several parallel string-matching-based spotting results, called spotting fusion. This allows
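To illustrate the spotting-fusion idea in the simplest possible terms, the sketch below assumes each limb's string-matching spotter emits candidate (start, end, label) intervals and keeps only detections on which enough parallel streams agree. The overlap test and the agreement threshold are assumptions for illustration, not the paper's exact fusion rule.

```python
# Sketch: fuse parallel per-limb spotting results by requiring agreement
# between overlapping candidate intervals (assumed, simplified fusion rule).
from typing import List, Tuple

Spot = Tuple[int, int, str]   # (start sample, end sample, activity label)

def overlaps(a: Spot, b: Spot) -> bool:
    """Two spots agree if they carry the same label and their intervals overlap."""
    return a[2] == b[2] and a[0] < b[1] and b[0] < a[1]

def fuse_spottings(streams: List[List[Spot]], min_agree: int = 2) -> List[Spot]:
    """Keep a candidate spot if at least `min_agree` parallel streams
    (e.g. left wrist, right wrist, elbows) report an overlapping spot."""
    fused = []
    for i, stream in enumerate(streams):
        for spot in stream:
            votes = 1 + sum(any(overlaps(spot, other) for other in streams[j])
                            for j in range(len(streams)) if j != i)
            if votes >= min_agree and spot not in fused:
                fused.append(spot)
    return fused

# Usage: two wrist streams agree on one activity; the unsupported spot is dropped.
wrist_l = [(100, 160, "check_hood"), (300, 340, "open_door")]
wrist_r = [(105, 158, "check_hood")]
print(fuse_spottings([wrist_l, wrist_r]))
```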
Gestures are strings: Efficient online gesture spotting and classification using string matching
Context awareness is one mechanism that allows wearable computers to provide information proactively, unobtrusively, and with minimal user disturbance. Gestures and activities are an important aspect of the user’s context. Detection and classification of gestures may be computationally expensive for low-power, miniaturized wearable platforms, such as those that may be integrated into garments. In this paper we introduce a novel method for online, real-time spotting and classification of gestures. Continuous user motion, acquired from a body-worn network of inertial sensors, is represented by strings of symbols encoding motion vectors. Fast string matching techniques, inspired by bioinformatics, spot trained gestures and classify them. Robustness to gesture variability is provided by approximate matching, efficiently implemented through dynamic programming. Our method is successfully demonstrated by spotting and classifying the occurrences of trained gestures within a continuous recording of a complex bicycle maintenance task. It executes in real time on a desktop computer using only a fraction of the CPU time. Only simple integer arithmetic operations are required, which makes this method ideally suited for implementation on body-worn sensor nodes and for real-time operation.
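A minimal sketch of the string-based spotting idea follows, assuming motion vectors are quantised into direction symbols and a trained gesture template is located in the continuous symbol stream with a Levenshtein-style dynamic program using integer arithmetic only. The quantisation scheme, the cost threshold, and the example data are assumptions made for this illustration, not the paper's exact algorithm.

```python
# Sketch: encode motion vectors as symbols, then spot a gesture template in a
# continuous symbol stream via approximate string matching (edit-distance DP).
import numpy as np

def encode(deltas, n_symbols=8):
    """Quantise 2-D motion vectors into direction symbols 0..n_symbols-1
    (assumed encoding scheme)."""
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])
    return list(((angles + np.pi) / (2 * np.pi) * n_symbols).astype(int) % n_symbols)

def spot(template, stream, max_cost=2):
    """Return (end_index, cost) pairs where the template approximately occurs.
    Classic DP for semi-global matching: a match may start anywhere, so the
    first row of the edit-distance table is all zeros. Integer arithmetic only."""
    m = len(template)
    prev = [0] * (len(stream) + 1)              # row 0: free start position
    for i in range(1, m + 1):
        cur = [i] + [0] * len(stream)
        for j in range(1, len(stream) + 1):
            sub = prev[j - 1] + (template[i - 1] != stream[j - 1])
            cur[j] = min(sub, prev[j] + 1, cur[j - 1] + 1)
        prev = cur
    return [(j, prev[j]) for j in range(1, len(stream) + 1) if prev[j] <= max_cost]

# Usage: spot a short direction-symbol template inside a longer noisy stream.
template = [0, 1, 2, 3, 4, 5, 6, 7]
stream   = [3, 3, 0, 1, 2, 2, 3, 4, 5, 6, 7, 1, 1]
print(spot(template, stream))                   # end positions with edit cost <= 2
```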