Detection of bimanual gestures everywhere: why it matters, what we need and what is missing
Bimanual gestures are of the utmost importance for the study of motor
coordination in humans and in everyday activities. A reliable detection of
bimanual gestures in unconstrained environments is fundamental for their
clinical study and to assess common activities of daily living. This paper
investigates techniques for a reliable, unconstrained detection and
classification of bimanual gestures. It assumes the availability of inertial
data originating from the two hands/arms, builds upon a previously developed
technique for gesture modelling based on Gaussian Mixture Modelling (GMM) and
Gaussian Mixture Regression (GMR), and compares different modelling and
classification techniques, which are based on a number of assumptions inspired
by literature about how bimanual gestures are represented and modelled in the
brain. Experiments show results related to 5 everyday bimanual activities,
which have been selected on the basis of three main parameters: (not)
constraining the two hands by a physical tool, (not) requiring a specific
sequence of single-hand gestures, being recursive (or not). In the best
performing combination of modelling approach and classification technique, all
five activities are recognized with up to 97% accuracy, 82% precision and 100%
recall.
Comment: Submitted to Robotics and Autonomous Systems (Elsevier)
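The per-class GMM modelling and maximum-likelihood classification described above can be sketched as follows. This is a minimal illustration with synthetic data: the feature dimensionality, class names and mixture sizes are assumptions, and the paper's GMR regression step is omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for inertial feature windows from two bimanual gestures
# (real data would be accelerometer/gyroscope features from both hands/arms).
gesture_a = rng.normal(loc=0.0, scale=0.5, size=(200, 6))
gesture_b = rng.normal(loc=3.0, scale=0.5, size=(200, 6))

# Fit one GMM per gesture class, as in GMM-based gesture modelling.
models = {
    "gesture_a": GaussianMixture(n_components=2, random_state=0).fit(gesture_a),
    "gesture_b": GaussianMixture(n_components=2, random_state=0).fit(gesture_b),
}

def classify(window):
    """Assign a window to the class whose GMM gives the highest average log-likelihood."""
    scores = {name: model.score(window) for name, model in models.items()}
    return max(scores, key=scores.get)
```

A new window of inertial features is then labelled with the best-scoring class, e.g. `classify(rng.normal(0.0, 0.5, size=(20, 6)))` returns `"gesture_a"`.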
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Given this interest, an increasing demand for
methods to process these videos, possibly in real time, is expected. Current
approaches combine different image features and quantitative methods to
accomplish specific objectives such as object detection, activity recognition
and user-machine interaction. This paper summarizes
the evolution of the state of the art in First Person Vision video analysis
between 1997 and 2014, highlighting, among others, most commonly used features,
methods, challenges and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart
Glasses, Computer Vision, Video Analytics, Human-machine Interaction
Assessment of Hand Gestures Using Wearable Sensors and Fuzzy Logic
Hand dexterity and motor control are critical in our everyday lives because a significant portion of the daily motions we perform are with our hands and require some degree of repetition and skill. Therefore, the development of technologies for hand and extremity rehabilitation is a significant area of research that will directly help patients recovering from hand impairments sustained from causes ranging from stroke and Parkinson’s disease to trauma and common injuries. Cyclic activity recognition and assessment is appropriate for hand and extremity rehabilitation because a majority of our essential motions are cyclic in nature. For a patient on the road to regaining functional independence with daily skills, improvement in cyclic motions constitutes an important and quantifiable rehabilitation goal. However, existing sensor technologies for hand rehabilitation face challenges that prevent the acquisition of long-term, continuous, accurate and actionable motion data, including complicated and uncomfortable system assemblies and a lack of integration with consumer electronics for easy readout. In our research, we have developed a glove-based system in which inertial measurement unit (IMU) sensors are used synergistically with flexible sensors to minimize the number of IMU sensors required. The classification capability of our system is improved by a fuzzy logic data analysis algorithm. We tested a total of 25 subjects using a glove-based apparatus to gather data on two-dimensional motions with one accelerometer and three-dimensional motions with one accelerometer and two flexible sensors. Our research provides an approach that has the potential to combine activity recognition and activity assessment using simple sensor systems to help patients recover and improve their overall quality of life.
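The kind of fuzzy-logic grading the abstract describes can be sketched with triangular membership functions over a single feature. This is a minimal illustration: the feature (cycle duration), category names and duration ranges are assumptions, not the paper's actual parameters.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def assess_cycle(duration_s):
    """Fuzzy grade of one motion cycle by its duration in seconds
    (illustrative ranges, not clinically validated)."""
    memberships = {
        "fast": tri(duration_s, 0.0, 0.5, 1.5),
        "normal": tri(duration_s, 0.5, 1.5, 2.5),
        "slow": tri(duration_s, 1.5, 2.5, 4.0),
    }
    # Defuzzify by taking the category with the highest membership.
    return max(memberships, key=memberships.get)
```

For example, `assess_cycle(1.5)` grades the cycle as `"normal"`, while `assess_cycle(3.0)` grades it as `"slow"`; a full system would combine several such features through fuzzy rules.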
Review of Wearable Devices and Data Collection Considerations for Connected Health
Wearable sensor technology has gradually extended its usability into a wide range of well-known applications. Wearable sensors can typically assess and quantify the wearer’s physiology and are commonly employed for human activity detection and quantified self-assessment. They are increasingly utilised to monitor patient health, rapidly assist with disease diagnosis, and help predict and often improve patient outcomes. Clinicians use various self-report questionnaires and well-known tests to report patient symptoms and assess their functional ability. These assessments are time-consuming and costly and depend on subjective patient recall. Moreover, such measurements may not accurately demonstrate the patient’s functional ability whilst at home. Wearable sensors can be used to detect and quantify specific movements in different applications, and the volume of data collected during long-term assessment of ambulatory movement can become immense. This paper discusses current techniques used to track and record various human body movements, as well as techniques used to measure activity and sleep from long-term data collected by wearable technology devices.
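One common way to reduce long-term accelerometer streams to compact activity measures is to count samples whose signal magnitude deviates from 1 g (the resting baseline). A minimal sketch, where the sample window and threshold are illustrative assumptions rather than values from the paper:

```python
import math

# Hypothetical tri-axial accelerometer samples in units of g.
samples = [(0.0, 0.0, 1.0), (0.6, 0.2, 1.1), (0.0, 0.1, 1.0), (0.8, 0.3, 0.9)]

def activity_counts(window, threshold=0.2):
    """Count samples whose vector magnitude deviates from 1 g by more than
    the threshold -- a simple proxy for movement intensity, letting long
    recordings be summarised as one count per epoch instead of raw samples."""
    count = 0
    for x, y, z in window:
        magnitude = math.sqrt(x * x + y * y + z * z)
        if abs(magnitude - 1.0) > threshold:
            count += 1
    return count
```

Aggregating such counts per epoch (e.g. per minute) is one way long-term recordings are compressed for activity and sleep analysis.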