Mobile Quantification and Therapy Course Tracking for Gait Rehabilitation
This paper presents a novel autonomous quality metric to quantify the
rehabilitation progress of subjects after knee or hip operations. The presented
method supports digital analysis of human gait patterns using smartphones. The
algorithm behind the autonomous metric utilizes calibrated accelerometer,
gyroscope and magnetometer signals from seven Inertial Measurement Units
attached to the lower body in order to classify and generate the grading system
values. The developed Android application connects the seven Inertial
Measurement Units via Bluetooth and performs the data acquisition and
processing in real-time. In total, nine features per acceleration direction and
lower-body joint angle are computed and extracted in real time to provide
fast feedback to the user. We compare the classification accuracy and
quantification capabilities of Linear Discriminant Analysis, Principal
Component Analysis and Naive Bayes algorithms. The presented system is able to
classify patients and control subjects with an accuracy of up to 100%. The
outcomes can be saved on the device or transmitted to the treating physicians
for later review of the subject's improvement and the effectiveness of
physiotherapy treatments in motor rehabilitation. The proposed autonomous
quality metric solution bears great potential for deployment in support of
digital healthcare and therapy.
Comment: 5 pages
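The abstract does not enumerate the nine per-signal features. A minimal sketch, assuming typical time-domain gait features (every feature choice here is hypothetical, not taken from the paper), could look like:

```python
import numpy as np

def window_features(sig):
    """Nine hypothetical time-domain features for one signal window."""
    sig = np.asarray(sig, dtype=float)
    diff = np.diff(sig)
    return np.array([
        sig.mean(),                  # mean
        sig.std(),                   # standard deviation
        sig.min(),                   # minimum
        sig.max(),                   # maximum
        sig.max() - sig.min(),       # peak-to-peak range
        np.median(sig),              # median
        np.sqrt(np.mean(sig ** 2)),  # root mean square
        np.abs(diff).mean(),         # mean absolute first difference
        np.sum(sig ** 2),            # signal energy
    ])
```

The feature vectors per acceleration axis and joint angle would then be concatenated and fed to a classifier such as Linear Discriminant Analysis or Naive Bayes.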
Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High Speed Scenarios
Event cameras are bio-inspired vision sensors that output pixel-level
brightness changes instead of standard intensity frames. These cameras do not
suffer from motion blur and have a very high dynamic range, which enables them
to provide reliable visual information during high speed motions or in scenes
characterized by high dynamic range. However, event cameras output little
information when motion is limited, for example when the camera is nearly
still. Conversely, standard cameras provide instant and rich information
about the environment most of the time (in low-speed and good lighting
scenarios), but they fail severely in case of fast motions, or difficult
lighting such as high dynamic range or low light scenes. In this paper, we
present the first state estimation pipeline that leverages the complementary
advantages of these two sensors by fusing in a tightly-coupled manner events,
standard frames, and inertial measurements. We show on the publicly available
Event Camera Dataset that our hybrid pipeline leads to an accuracy improvement
of 130% over event-only pipelines, and 85% over standard-frames-only
visual-inertial systems, while still being computationally tractable.
Furthermore, we use our pipeline to demonstrate - to the best of our knowledge
- the first autonomous quadrotor flight using an event camera for state
estimation, unlocking flight scenarios that were not reachable with traditional
visual-inertial odometry, such as low-light environments and high-dynamic range
scenes.
Comment: 8 pages, 9 figures, 2 tables
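For background, an event camera can be modeled as emitting an event (x, y, t, polarity) whenever the log-brightness at a pixel changes by more than a contrast threshold C. A minimal sketch of this standard model (the function name and threshold value are illustrative, not from the paper):

```python
import numpy as np

def brightness_change_events(log_i_prev, log_i_curr, t, contrast=0.2):
    """Emit (x, y, t, polarity) wherever |delta log-intensity| >= contrast."""
    delta = np.asarray(log_i_curr, float) - np.asarray(log_i_prev, float)
    ys, xs = np.nonzero(np.abs(delta) >= contrast)  # pixels that fired
    return [(int(x), int(y), t, 1 if delta[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]
```

When the scene is static, `delta` is near zero and no events fire, which is exactly the regime where the pipeline falls back on standard frames.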
Attitude Estimation and Control Using Linear-Like Complementary Filters: Theory and Experiment
This paper proposes new algorithms for attitude estimation and control based
on fused inertial vector measurements, using the linear complementary filter
principle. First, n-th-order direct and passive complementary filters combined
with the TRIAD algorithm are proposed to provide attitude estimation solutions.
These solutions, which are robust with respect to noise, include gyro bias
estimation. Thereafter, the same data-fusion principle is used to address
the problem of attitude tracking based on inertial vector measurements. Thus,
instead of using noisy raw measurements in the control law, a new control
solution is proposed that includes a linear-like complementary filter to deal
with the noise. A stability analysis of the tracking-error dynamics based on
LaSalle's invariance theorem proves that almost all trajectories converge
asymptotically to the desired equilibrium. Experimental results, obtained with
a DIY quadrotor equipped with the APM2.6 autopilot, show the effectiveness and
performance of the proposed solutions.
Comment: Submitted for journal publication on March 09, 2015. Partial results
related to this work have been presented in IEEE-ROBIO-201
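The paper's n-th-order filters are not reproduced here, but the underlying idea can be sketched as a first-order passive complementary filter for a single axis with gyro-bias estimation (the gains kp, ki are illustrative, not the paper's):

```python
def complementary_step(theta, bias, gyro, theta_meas, dt, kp=2.0, ki=1.0):
    """One step of a 1-DoF complementary filter with gyro-bias estimation.

    theta, bias : current angle and gyro-bias estimates
    gyro        : raw rate-gyro reading (true rate + bias + noise)
    theta_meas  : angle recovered from vector measurements (e.g. accelerometer)
    """
    err = theta_meas - theta                       # innovation from the vector sensor
    theta = theta + (gyro - bias + kp * err) * dt  # bias-corrected integration
    bias = bias - ki * err * dt                    # slow integral action tracks the bias
    return theta, bias
```

The proportional term kp blends in the low-frequency vector measurement, while the integral term ki slowly absorbs the constant gyro bias, which is the gyro-bias estimation the abstract refers to.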
Integration of Absolute Orientation Measurements in the KinectFusion Reconstruction pipeline
In this paper, we show how absolute orientation measurements provided by
low-cost but high-fidelity IMU sensors can be integrated into the KinectFusion
pipeline. We show that this integration improves the runtime, robustness, and
quality of the 3D reconstruction. In particular, we use this orientation data
to seed and regularize the ICP registration technique. We also present a
technique to filter the pairs of 3D matched points based on the distribution of
their distances. This filter is implemented efficiently on the GPU. Estimating
the distribution of the distances helps control the number of iterations
necessary for the convergence of the ICP algorithm. Finally, we show
experimental results that highlight improvements in robustness, a speed-up of
almost 12%, and a gain in tracking quality of 53% for the ATE metric on the
Freiburg benchmark.
Comment: CVPR Workshop on Visual Odometry and Computer Vision Applications Based on Location Clues 201
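The abstract does not give the exact filtering rule. A minimal sketch of one plausible reading, rejecting matched pairs whose distance falls in the tail of the distance distribution (the mean + k·std rule and the value of k are assumptions):

```python
import numpy as np

def filter_matched_pairs(src, dst, k=2.0):
    """Keep matched 3-D point pairs whose distance is within mean + k*std."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    d = np.linalg.norm(src - dst, axis=1)   # per-pair Euclidean distance
    keep = d <= d.mean() + k * d.std()      # drop the far tail as outliers
    return src[keep], dst[keep]
```

Because each pair is handled independently, this test maps naturally onto a GPU, and the distance statistics themselves indicate how close ICP is to convergence.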
Recognition of elementary arm movements using orientation of a tri-axial accelerometer located near the wrist
In this paper we present a method for recognising three fundamental movements of the human arm (reach and retrieve, lift cup to mouth, rotation of the arm) by determining the orientation of a tri-axial accelerometer located near the wrist. Our objective is to detect the occurrence of such movements performed with the impaired arm of a stroke patient during normal daily activities as a means to assess their rehabilitation. The method relies on accurately mapping transitions of predefined, standard orientations of the accelerometer to corresponding elementary arm movements. To evaluate the technique, kinematic data was collected from four healthy subjects and four stroke patients as they performed a number of activities involved in a representative activity of daily living, 'making-a-cup-of-tea'. Our experimental results show that the proposed method can independently recognise all three of the elementary upper limb movements investigated, with accuracies in the range 91–99% for healthy subjects and 70–85% for stroke patients.
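The orientation mapping itself is not specified in the abstract. A hedged sketch of the general idea, labelling a quasi-static sample by the dominant axis of the gravity vector and collapsing a sequence into its orientation transitions (the labels and helper names are illustrative):

```python
import numpy as np

def coarse_orientation(acc):
    """Label sensor orientation by the dominant axis of the gravity vector.

    acc : 3-axis accelerometer sample in g, taken while quasi-static.
    """
    a = np.asarray(acc, dtype=float)
    axis = int(np.argmax(np.abs(a)))        # axis most aligned with gravity
    sign = '+' if a[axis] >= 0 else '-'
    return sign + 'xyz'[axis]

def orientation_transitions(samples):
    """Collapse a sample sequence into its sequence of orientation changes."""
    labels = [coarse_orientation(s) for s in samples]
    return [l for i, l in enumerate(labels) if i == 0 or l != labels[i - 1]]
```

A movement such as "lift cup to mouth" would then correspond to a characteristic transition between two such predefined orientation labels.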
Unsupervised Contact Learning for Humanoid Estimation and Control
This work presents a method for contact state estimation using fuzzy
clustering to learn contact probability for full, six-dimensional humanoid
contacts. The data required for training comes solely from proprioceptive
sensors - end-effector contact wrench sensors and inertial measurement units
(IMUs) - and the method is completely unsupervised. The resulting cluster means
are used to efficiently compute the probability of contact in each of the six
end-effector degrees of freedom (DoFs) independently. This clustering-based
contact probability estimator is validated in a kinematics-based base state
estimator in a simulation environment with realistic added sensor noise for
locomotion over rough, low-friction terrain on which the robot is subject to
foot slip and rotation. The proposed base state estimator, which utilizes these
six-DoF contact probability estimates, is shown to perform considerably better
than one that determines kinematic contact constraints purely from the
measured normal force.
Comment: Submitted to the IEEE International Conference on Robotics and Automation (ICRA) 201
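Learned cluster means can be turned into a probability with the standard fuzzy c-means membership formula. A minimal sketch (the fuzzifier m = 2 and the use of two clusters for "contact" / "no contact" are assumptions, not details from the paper):

```python
import numpy as np

def fcm_membership(x, centers, m=2.0):
    """Fuzzy c-means membership of sample x in each cluster (sums to 1)."""
    centers = np.asarray(centers, dtype=float)
    d = np.linalg.norm(centers - np.asarray(x, dtype=float), axis=1)
    d = np.maximum(d, 1e-12)           # guard against division by zero
    inv = d ** (-2.0 / (m - 1.0))      # standard FCM membership weights
    return inv / inv.sum()
```

With one cluster mean representing "in contact" and one representing "free" for a given DoF, the membership in the contact cluster serves directly as that DoF's contact probability.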