The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and smart glasses has increased the interest of computer vision scientists in the First Person perspective. Nowadays, this field is attracting the attention and investment of companies aiming to develop commercial devices with First Person Vision recording capabilities. Owing to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches combine different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-Machine Interaction
Recognition of elementary arm movements using orientation of a tri-axial accelerometer located near the wrist
In this paper we present a method for recognising three fundamental movements of the human arm (reach and retrieve, lift cup to mouth, rotation of the arm) by determining the orientation of a tri-axial accelerometer located near the wrist. Our objective is to detect the occurrence of such movements performed with the impaired arm of a stroke patient during normal daily activities as a means to assess their rehabilitation. The method relies on accurately mapping transitions of predefined, standard orientations of the accelerometer to corresponding elementary arm movements. To evaluate the technique, kinematic data was collected from four healthy subjects and four stroke patients as they performed a number of activities involved in a representative activity of daily living, 'making-a-cup-of-tea'. Our experimental results show that the proposed method can independently recognise all three of the elementary upper limb movements investigated, with accuracies in the range 91–99% for healthy subjects and 70–85% for stroke patients.
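To make the orientation-mapping idea concrete, the following minimal Python sketch discretises accelerometer samples into dominant-gravity-axis orientations and matches the resulting transition sequences against movement templates. The smoothing window, orientation labels, and templates are illustrative assumptions, not the parameters reported in the paper.

    import numpy as np

    def dominant_orientation(sample):
        """Map one (ax, ay, az) accelerometer sample (in g) to a discrete
        orientation label such as '+x' or '-z' (axis most aligned with gravity)."""
        axis = int(np.argmax(np.abs(sample)))
        sign = '+' if sample[axis] > 0 else '-'
        return sign + 'xyz'[axis]

    def orientation_sequence(samples, fs=50, win=0.5):
        """Reduce an (N, 3) sample stream to its sequence of distinct
        orientations, smoothing over `win` seconds to suppress noise."""
        n = max(1, int(fs * win))
        kernel = np.ones(n) / n
        smooth = np.column_stack([np.convolve(samples[:, i], kernel, mode='same')
                                  for i in range(3)])
        seq = []
        for s in smooth:
            label = dominant_orientation(s)
            if not seq or seq[-1] != label:
                seq.append(label)
        return seq

    # Hypothetical orientation-transition templates for the three movements.
    TEMPLATES = {
        'reach_and_retrieve': ['-z', '+y', '-z'],
        'lift_cup_to_mouth':  ['-z', '-x'],
        'rotate_arm':         ['-z', '+z'],
    }

    def recognise(samples):
        """Return every movement whose template occurs, in order, as a
        subsequence of the observed orientation sequence."""
        seq = orientation_sequence(samples)
        hits = []
        for name, template in TEMPLATES.items():
            it = iter(seq)
            if all(any(o == t for o in it) for t in template):
                hits.append(name)
        return hits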
Is the timed-up and go test feasible in mobile devices? A systematic review
The number of older adults is increasing worldwide, and it is expected that by 2050 over 2 billion individuals will be more than 60 years old. Older adults are exposed to numerous pathological problems such as Parkinson's disease, amyotrophic lateral sclerosis, post-stroke conditions, and orthopedic disturbances. Several physiotherapy methods that involve measurement of movements, such as the Timed-Up and Go test, can be used to support efficient and effective evaluation of pathological symptoms and promotion of health and well-being. In this systematic review, the authors aim to determine how the inertial sensors embedded in mobile devices are employed for the measurement of the different parameters involved in the Timed-Up and Go test. The main contribution of this paper consists of the identification of the different studies that utilize the sensors available in mobile devices for the measurement of the results of the Timed-Up and Go test. The results show that the motion sensors embedded in mobile devices can be used for these types of studies, and the most commonly used sensors are the magnetometer, accelerometer, and gyroscope available in off-the-shelf smartphones. The features analyzed in this paper are categorized as quantitative, quantitative + statistic, dynamic balance, gait properties, state transitions, and raw statistics. These features utilize the accelerometer and gyroscope sensors and facilitate recognition of daily activities, accidents such as falling, and some diseases, as well as the measurement of the subject's performance during the test execution.
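As an illustration of how a Timed-Up and Go trial could be segmented from the smartphone sensors discussed above, the sketch below thresholds the accelerometer magnitude to locate the sit-to-stand and stand-to-sit transitions and counts turns from the gyroscope yaw rate. The thresholds and heuristics are assumptions for illustration, not values drawn from the reviewed studies.

    import numpy as np

    def tug_phases(acc, gyro, fs=100):
        """Rough segmentation of a Timed-Up and Go trial from smartphone
        accelerometer (m/s^2) and gyroscope (rad/s) streams of shape (N, 3).
        All thresholds are assumed values, not taken from the reviewed studies."""
        t = np.arange(len(acc)) / fs
        # Deviation of acceleration magnitude from gravity (near 0 at rest).
        mag = np.abs(np.linalg.norm(acc, axis=1) - 9.81)
        active = mag > 1.5                    # assumed activity threshold (m/s^2)
        start = int(np.argmax(active))        # first active sample ~ sit-to-stand
        end = len(active) - 1 - int(np.argmax(active[::-1]))  # last ~ stand-to-sit
        # Count rising edges of an assumed 1.0 rad/s yaw-rate threshold as turns.
        turning = np.abs(gyro[:, 2]) > 1.0
        turn_onsets = np.flatnonzero(np.diff(turning.astype(int)) == 1)
        return {
            'total_time_s': float(t[end] - t[start]),
            'n_turns_detected': int(len(turn_onsets)),
        }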
EmbraceNet for Activity: A Deep Multimodal Fusion Architecture for Activity Recognition
Human activity recognition using multiple sensors has been a challenging but promising task in recent decades. In this paper, we propose a deep multimodal fusion model for activity recognition based on the recently proposed feature fusion architecture named EmbraceNet. Our model processes each sensor's data independently, combines the features with the EmbraceNet architecture, and post-processes the fused feature to predict the activity. In addition, we propose additional processes to boost the performance of our model. We submit the results obtained from our proposed model to the SHL recognition challenge with the team name "Yonsei-MCML."
Comment: Accepted in HASCA at ACM UbiComp/ISWC 2019; won 2nd place in the SHL Recognition Challenge 2019
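For readers unfamiliar with EmbraceNet, the following PyTorch sketch shows the core fusion step the abstract refers to: each modality's features are docked to a common size, and every component of the fused vector is then drawn from exactly one modality by multinomial sampling. The layer sizes and module names are placeholders, not the authors' implementation.

    import torch
    import torch.nn as nn

    class EmbraceFusion(nn.Module):
        """Minimal sketch of EmbraceNet-style fusion: dock each modality to a
        common size, then draw every component of the fused vector from exactly
        one modality via multinomial sampling. Sizes are placeholders."""

        def __init__(self, input_dims, embrace_dim=256):
            super().__init__()
            self.docking = nn.ModuleList(
                nn.Linear(d, embrace_dim) for d in input_dims)

        def forward(self, features, probs=None):
            # features: list of per-sensor tensors, each of shape (B, d_k).
            docked = torch.stack([torch.relu(dock(f))
                                  for dock, f in zip(self.docking, features)])
            m, b, c = docked.shape                    # (modalities, batch, C)
            if probs is None:                         # equal modality weights
                probs = torch.full((b, m), 1.0 / m, device=docked.device)
            # For each of the C components, sample the contributing modality.
            choice = torch.multinomial(probs, c, replacement=True)   # (B, C)
            mask = torch.zeros(m, b, c, device=docked.device)
            mask.scatter_(0, choice.unsqueeze(0), 1.0)
            return (docked * mask).sum(dim=0)         # fused feature, (B, C)

In the original formulation, adjusting the per-modality probabilities also implements modality dropout during training; at inference the stochastic selection can be replaced by its expectation, i.e., a probability-weighted sum of the docked features.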
- …