Towards a Practical Pedestrian Distraction Detection Framework using Wearables
Pedestrian safety continues to be a significant concern in urban communities
and pedestrian distraction is emerging as one of the main causes of grave and
fatal accidents involving pedestrians. The advent of sophisticated mobile and
wearable devices, equipped with high-precision on-board sensors capable of
measuring fine-grained user movements and context, provides a tremendous
opportunity for designing effective pedestrian safety systems and applications.
Accurate and efficient recognition of pedestrian distractions in real-time
given the memory, computation and communication limitations of these devices,
however, remains the key technical challenge in the design of such systems.
Earlier research efforts in pedestrian distraction detection using data
available from mobile and wearable devices have primarily focused only on
achieving high detection accuracy, resulting in designs that are either
resource intensive and unsuitable for implementation on mainstream mobile
devices, or computationally slow and not useful for real-time pedestrian safety
applications, or require specialized hardware and are thus less likely to be adopted by
most users. In the quest for a pedestrian safety system that achieves a
favorable balance between computational efficiency, detection accuracy, and
energy consumption, this paper makes the following main contributions: (i)
design of a novel complex activity recognition framework which employs motion
data available from users' mobile and wearable devices and a lightweight
frequency matching approach to accurately and efficiently recognize complex
distraction related activities, and (ii) a comprehensive comparative evaluation
of the proposed framework with well-known complex activity recognition
techniques in the literature with the help of data collected from human subject
pedestrians and prototype implementations on commercially-available mobile and
wearable devices.
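The lightweight frequency-matching idea can be sketched roughly as follows. This is a hypothetical illustration, not the authors' actual implementation: the template frequencies, activity names, and window settings are assumptions. The sketch computes the dominant frequency of an accelerometer window via an FFT and matches it against per-activity templates.

```python
import numpy as np

# Hypothetical per-activity templates: dominant frequency (Hz) of the
# accelerometer magnitude signal for each activity. Illustrative values,
# not taken from the paper.
TEMPLATES = {"walking": 2.0, "texting_while_walking": 1.4, "standing": 0.0}

def dominant_frequency(window, fs):
    """Return the strongest non-DC frequency (Hz) in an accelerometer window."""
    spectrum = np.abs(np.fft.rfft(window - window.mean()))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def classify(window, fs):
    """Match the window's dominant frequency to the nearest activity template."""
    f = dominant_frequency(window, fs)
    return min(TEMPLATES, key=lambda a: abs(TEMPLATES[a] - f))

# Synthetic 2 Hz 'walking' signal sampled at 50 Hz for 2 s.
fs = 50
t = np.arange(0, 2, 1.0 / fs)
window = np.sin(2 * np.pi * 2.0 * t)
print(classify(window, fs))  # → walking
```

Matching a single spectral peak is far cheaper than training a full classifier on-device, which is the kind of trade-off the abstract describes between accuracy and computational efficiency.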
Human activity recognition using wearable sensors: a deep learning approach
In the past decades, Human Activity Recognition (HAR) has attracted considerable research attention from a wide range of pattern recognition and human–computer interaction researchers due to its prominent applications, such as smart-home health care. The wealth of sensor information requires efficient classification and analysis methods, and deep learning represents a promising technique for large-scale data analytics. There are various ways of using different sensors for human activity recognition in a smartly controlled environment. Among them, physical human activity recognition through wearable sensors provides valuable information about an individual's degree of functional ability and lifestyle. Much existing research relies on real-time processing, which increases the power consumption of mobile devices; since mobile phones are resource-limited, implementing and evaluating different recognition systems on them is a challenging task.
This work proposes a Deep Belief Network (DBN) model for successful human activity recognition. Various experiments are performed on a real-world wearable sensor dataset to verify the effectiveness of the deep learning algorithm. The results show that the proposed DBN performs competitively in comparison with other algorithms and achieves satisfactory activity recognition performance. Some open problems and ideas are also presented and should be investigated in future research.
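A DBN is commonly built by greedy layer-wise pretraining of stacked Restricted Boltzmann Machines followed by a supervised layer. scikit-learn does not ship a full DBN, but that pattern can be roughly approximated with its `BernoulliRBM` in a pipeline; the layer sizes, hyperparameters, and synthetic data below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Synthetic stand-in for windowed, normalized wearable-sensor features
# (values in [0, 1], as BernoulliRBM expects); two fake activity classes.
rng = np.random.default_rng(0)
X = rng.random((200, 32))
y = (X[:, 0] > 0.5).astype(int)

# Stacked RBMs (unsupervised, layer-wise feature learning) topped with a
# supervised classifier, as a rough stand-in for DBN fine-tuning.
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05,
                          n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05,
                          n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
print(dbn.score(X, y))  # training accuracy
```

A full DBN would additionally backpropagate through the pretrained layers; this pipeline keeps the RBM features fixed, which is a simpler but related design.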
Discrete techniques applied to low-energy mobile human activity recognition. A new approach
Human activity recognition systems are currently implemented by hundreds of applications and, in
recent years, several technology manufacturers have introduced new wearable devices for this purpose.
Battery consumption constitutes a critical point in these systems since most are provided with a
rechargeable battery. In this paper, by using discrete techniques based on the Ameva algorithm, an innovative
approach for human activity recognition systems on mobile devices is presented. Furthermore,
unlike other systems in current use, this proposal enables recognition of high granularity activities by
using accelerometer sensors. Hence, the accuracy of activity recognition systems can be increased without
sacrificing efficiency. A comparison is carried out between the proposed approach and an approach
based on well-known neural networks.
Junta de Andalucia Simon TIC-805
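The Ameva criterion underlying this discretization scores a candidate set of cut points by a chi-square statistic over the intervals-by-classes contingency table, normalized by the number of intervals; the discretization maximizing the coefficient is preferred. The sketch below follows the published Ameva formula, but the helper function and toy data are illustrative, not the paper's code.

```python
import numpy as np

def ameva_coefficient(values, labels, cut_points):
    """Ameva criterion for one candidate discretization:
    Ameva(k) = chi2(k) / (k * (l - 1)), computed from the
    intervals-by-classes contingency table. Larger is better."""
    intervals = np.digitize(values, cut_points)  # k = len(cut_points) + 1 bins
    classes = np.unique(labels)
    k, l = len(cut_points) + 1, len(classes)
    n = np.zeros((k, l))
    for i, c in enumerate(classes):
        for j in range(k):
            n[j, i] = np.sum((intervals == j) & (labels == c))
    N = n.sum()
    denom = np.outer(n.sum(axis=1), n.sum(axis=0))
    # Guard empty intervals (zero rows) against 0/0.
    ratio = np.divide(n**2, denom, out=np.zeros_like(n), where=denom > 0)
    chi2 = N * (-1.0 + ratio.sum())
    return chi2 / (k * (l - 1))

# Toy example: an accelerometer feature separating two activities at 0.0.
vals = np.array([-2.0, -1.0, -1.5, 1.0, 2.0, 1.5])
labs = np.array([0, 0, 0, 1, 1, 1])
good = ameva_coefficient(vals, labs, [0.0])
bad = ameva_coefficient(vals, labs, [1.8])
print(good > bad)  # → True: the discriminative cut scores higher
```

Because the criterion only needs class counts per interval, evaluating it on-device is cheap, which is what makes discretization attractive for low-energy recognition.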
Deep Sensing: Inertial and Ambient Sensing for Activity Context Recognition using Deep Convolutional Neural Networks
With the widespread use of embedded sensing capabilities of mobile devices, there has
been unprecedented development of context-aware solutions. This allows the proliferation of
various intelligent applications, such as those for remote health and lifestyle monitoring, intelligent
personalized services, etc. However, activity context recognition based on multivariate time series
signals obtained from mobile devices in unconstrained conditions is naturally prone to class
imbalance problems: recognition models tend to predict the majority classes while ignoring the
minority classes, resulting in poor generalization. To address this problem, we propose
augmenting the time series signals from inertial sensors with signals from ambient sensing to
train deep convolutional neural network (DCNN) models. DCNNs capture the local dependency and
scale invariance of these combined sensor signals. We therefore developed one DCNN model using
only inertial sensor signals and another that combines signals from both inertial and ambient
sensors, aiming to mitigate the class imbalance problem and improve the
performance of the recognition model. Evaluation and analysis of the proposed system using data
with imbalanced classes show that the system achieved better recognition accuracy when data from
inertial sensors are combined with those from ambient sensors, such as environmental noise level
and illumination, with an overall accuracy improvement of 5.3%.
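The sensor-augmentation step described here, combining inertial channels with ambient channels into one multichannel input window for the DCNN, can be sketched as follows. The window length, sampling rate, channel counts, and sensor values are illustrative assumptions.

```python
import numpy as np

WIN = 100  # hypothetical 100-sample window (e.g., 2 s at 50 Hz)

# Inertial channels: 3-axis accelerometer + 3-axis gyroscope (6 channels).
inertial = np.random.default_rng(0).standard_normal((6, WIN))

# Ambient channels: environmental noise level and illumination, typically
# sampled far more slowly, so each reading is repeated to the window length.
noise_db = np.repeat([42.0], WIN)
lux = np.repeat([310.0], WIN)
ambient = np.stack([noise_db, lux])  # shape (2, WIN)

def znorm(x):
    """Per-channel z-normalization so ambient magnitudes don't dominate."""
    mu = x.mean(axis=1, keepdims=True)
    sd = x.std(axis=1, keepdims=True)
    return (x - mu) / np.where(sd == 0, 1.0, sd)

# One (channels, time) window ready for a 1D convolutional network.
combined = np.concatenate([znorm(inertial), znorm(ambient)])
print(combined.shape)  # (8, 100)
```

Stacking the ambient signals as extra channels lets the convolution kernels learn cross-modal features directly, rather than fusing modalities after separate networks.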
Fusion of non-visual and visual sensors for human tracking
Human tracking is an extensively researched yet still challenging area in the Computer Vision field, with a wide range of applications such as surveillance and healthcare. People may not be successfully tracked with visual information alone in challenging cases such as long-term occlusion. We therefore propose to combine information from other sensors with the surveillance cameras to persistently localize and track humans, an approach that is becoming more promising with the pervasiveness of mobile devices such as cellphones, smart watches and smart glasses embedded with all kinds of sensors, including accelerometers, gyroscopes, magnetometers, GPS, WiFi modules and so on. In this thesis, we first investigate the application of Inertial Measurement Units (IMUs) from mobile devices to human activity recognition and human tracking; we then develop novel persistent human tracking and indoor localization algorithms based on the fusion of non-visual and visual sensors, which not only overcomes the occlusion challenge in visual tracking, but also alleviates the calibration and drift problems in IMU tracking --Abstract, page iii
Towards a fuzzy-based multi-classifier selection module for activity recognition applications
Performing activity recognition using the information provided by the different sensors embedded in a smartphone faces limitations, due to the capabilities of those devices, when the computations are carried out in the terminal. In this work a fuzzy inference module is implemented in order to decide which classifier is the most appropriate to use at a specific moment, given the application requirements and the device context characterized by its battery level, available memory and CPU load. The set of classifiers considered is composed of Decision Tables and Trees that have been trained using different numbers of sensors and features. In addition, some classifiers perform activity recognition regardless of the on-body device position, while others rely on the previous recognition of that position and use a classifier trained with measurements gathered with the mobile placed on that specific position. The modules implemented show that an evaluation of the classifiers allows them to be ranked, so that the fuzzy inference module can periodically choose the one that best suits the device context and application requirements.
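The core idea, fuzzy memberships over the device context driving the choice of classifier, can be sketched in miniature as follows. The membership shapes, rule, and classifier cost/accuracy profiles are illustrative assumptions, not the paper's actual inference module.

```python
# Fuzzy memberships on a 0-100 scale (battery %, free memory %, CPU load %).
def low(x):
    """Membership of 'low': 1 at 0, fading to 0 at 50."""
    return max(0.0, min(1.0, (50.0 - x) / 50.0))

def high(x):
    """Membership of 'high': 0 at 50, rising to 1 at 100."""
    return max(0.0, min(1.0, (x - 50.0) / 50.0))

# Hypothetical classifier profiles: resource cost and accuracy in [0, 1].
CLASSIFIERS = {
    "decision_table_1sensor": {"cost": 0.2, "accuracy": 0.80},
    "decision_tree_all_sensors": {"cost": 0.8, "accuracy": 0.92},
}

def select(battery, memory, cpu_load):
    """Pick the classifier with the best fuzzy fitness for this context.
    Rule: scarce resources favor cheap classifiers; ample resources
    favor accurate ones."""
    scarcity = max(low(battery), low(memory), high(cpu_load))
    def fitness(p):
        return scarcity * (1.0 - p["cost"]) + (1.0 - scarcity) * p["accuracy"]
    return max(CLASSIFIERS, key=lambda c: fitness(CLASSIFIERS[c]))

print(select(battery=15, memory=30, cpu_load=85))  # → decision_table_1sensor
print(select(battery=95, memory=90, cpu_load=10))  # → decision_tree_all_sensors
```

A real module would use richer rule bases and defuzzification, but even this two-rule sketch shows how periodic re-evaluation lets the selection track the device's changing battery, memory, and CPU state.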