Functional Electrical Stimulation mediated by Iterative Learning Control and 3D robotics reduces motor impairment in chronic stroke
Background: Novel stroke rehabilitation techniques that employ electrical stimulation (ES) and robotic technologies are effective in reducing upper limb impairments. ES is most effective when it is applied to support the patient's voluntary effort; however, current systems fail to fully exploit this connection. This study builds on previous work using advanced ES controllers, and aims to investigate the feasibility of Stimulation Assistance through Iterative Learning (SAIL), a novel upper limb stroke rehabilitation system which utilises robotic support, ES, and voluntary effort. Methods: Five chronic stroke participants with hemiparesis and impaired upper limb function attended 18 one-hour intervention sessions. Participants completed virtual reality tracking tasks in which they moved their impaired arm to follow a slowly moving sphere along a specified trajectory. To do this, each participant's arm was supported by a robot. ES, mediated by advanced iterative learning control (ILC) algorithms, was applied to the triceps and anterior deltoid muscles. Each movement was repeated 6 times, and ILC adjusted the amount of stimulation applied on each trial to improve accuracy and maximise voluntary effort. Participants completed clinical assessments (Fugl-Meyer, Action Research Arm Test) at baseline and post-intervention, as well as unassisted tracking tasks at the beginning and end of each intervention session. Data were analysed using t-tests and linear regression. Results: From baseline to post-intervention, Fugl-Meyer scores improved, assisted and unassisted tracking performance improved, and the amount of ES required to assist tracking was reduced. Conclusions: The concept of minimising support from ES using ILC algorithms was demonstrated. The positive results are promising with respect to reducing upper limb impairments following stroke; however, a larger study is required to confirm these findings.
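The trial-to-trial stimulation update described in this abstract can be sketched with a generic P-type ILC law. This is an illustration only, not the SAIL controllers themselves; the toy plant, learning gain, and reference trajectory are invented for the example:

```python
import numpy as np

# Toy plant: stimulation input u(t) produces an arm response y(t) = 0.8 * u(t).
# The ILC goal is to track a reference trajectory over repeated trials.
def plant(u):
    return 0.8 * u

t = np.linspace(0.0, 1.0, 50)
reference = np.sin(np.pi * t)          # desired tracking trajectory
u = np.zeros_like(t)                   # stimulation profile, trial 0
L = 0.5                                # learning gain (P-type ILC)

errors = []
for trial in range(6):                 # six repetitions, as in the study
    y = plant(u)
    e = reference - y                  # tracking error on this trial
    errors.append(float(np.max(np.abs(e))))
    u = u + L * e                      # update stimulation for the next trial

# The error shrinks trial over trial: the controller learns the feedforward
# stimulation needed, so less correction is required on each repetition.
final_error = errors[-1]
```

The same structure underlies the study's observation that the amount of ES needed decreases as performance improves: the learned feedforward input absorbs the repeatable part of the task.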
Planar Object Tracking in the Wild: A Benchmark
Planar object tracking is an actively studied problem in vision-based robotic
applications. While several benchmarks have been constructed for evaluating
state-of-the-art algorithms, there is a lack of video sequences captured in the
wild rather than in a constrained laboratory environment. In this paper, we
present a carefully designed planar object tracking benchmark containing 210
videos of 30 planar objects sampled in the natural environment. In particular,
for each object, we shoot seven videos involving various challenging factors,
namely scale change, rotation, perspective distortion, motion blur, occlusion,
out-of-view, and unconstrained. The ground truth is carefully annotated
semi-manually to ensure quality. Moreover, eleven state-of-the-art
algorithms are evaluated on the benchmark using two evaluation metrics, with
detailed analysis provided for the evaluation results. We expect the proposed
benchmark to benefit future studies on planar object tracking. Comment: Accepted by ICRA 201
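A common way to score planar trackers, which benchmarks of this kind can use, is the alignment error between target corners warped by the estimated and ground-truth homographies. The sketch below illustrates that idea; the corner layout and homographies are invented, and the benchmark's exact metrics may differ:

```python
import numpy as np

def warp_corners(H, corners):
    """Apply a 3x3 homography to an Nx2 array of corner points."""
    pts = np.hstack([corners, np.ones((len(corners), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def alignment_error(H_est, H_gt, corners):
    """Mean distance between corners under estimated vs. ground-truth warps."""
    return float(np.mean(np.linalg.norm(
        warp_corners(H_est, corners) - warp_corners(H_gt, corners), axis=1)))

# Unit-square target; ground truth is a pure translation, estimate slightly off.
corners = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
H_gt  = np.array([[1, 0, 5.0], [0, 1, 3.0], [0, 0, 1]], float)
H_est = np.array([[1, 0, 5.2], [0, 1, 3.0], [0, 0, 1]], float)

err = alignment_error(H_est, H_gt, corners)   # mean corner error in pixels
```

Thresholding such an error per frame yields the precision-style curves typically reported for tracking benchmarks.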
Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age
Simultaneous Localization and Mapping (SLAM) consists of the concurrent
construction of a model of the environment (the map), and the estimation of the
state of the robot moving within it. The SLAM community has made astonishing
progress over the last 30 years, enabling large-scale real-world applications,
and witnessing a steady transition of this technology to industry. We survey
the current state of SLAM. We start by presenting what is now the de-facto
standard formulation for SLAM. We then review related work, covering a broad
set of topics including robustness and scalability in long-term mapping, metric
and semantic representations for mapping, theoretical performance guarantees,
active SLAM and exploration, and other new frontiers. This paper simultaneously
serves as a position paper and tutorial to those who are users of SLAM. By
looking at the published research with a critical eye, we delineate open
challenges and new research issues that still deserve careful scientific
investigation. The paper also contains the authors' take on two questions that
often animate discussions during robotics conferences: Do robots need SLAM? and
Is SLAM solved?
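The de facto standard formulation referred to above casts SLAM as maximum-a-posteriori estimation over a factor graph, solved as (nonlinear) least squares. A deliberately tiny 1D pose-graph, with invented odometry and loop-closure measurements, shows the structure; real SLAM is nonlinear over SE(2)/SE(3):

```python
import numpy as np

# Minimal 1D pose graph: poses x1, x2 (x0 fixed at the origin), two odometry
# factors and one loop-closure factor. MAP estimation reduces to least squares:
#   minimize || A x - b ||^2
# Rows: x1 - x0 = 1.0 (odometry), x2 - x1 = 1.1 (odometry), x2 - x0 = 2.0 (loop).
A = np.array([[ 1.0, 0.0],
              [-1.0, 1.0],
              [ 0.0, 1.0]])
b = np.array([1.0, 1.1, 2.0])

x, residuals, *_ = np.linalg.lstsq(A, b, rcond=None)
x1, x2 = x   # fused estimates reconcile odometry drift with the loop closure
```

Note how neither measurement set is satisfied exactly: the solver spreads the inconsistency between odometry and the loop closure, which is precisely what loop closing does in a full SLAM back-end.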
Human Like Adaptation of Force and Impedance in Stable and Unstable Tasks
Abstract: This paper presents a novel human-like learning controller to interact with unknown environments. Strictly derived from the minimization of instability, motion error, and effort, the controller compensates for the disturbance in the environment in interaction tasks by adapting feedforward force and impedance. In contrast with conventional learning controllers, the new controller can deal with unstable situations that are typical of tool use and gradually acquire a desired stability margin. Simulations show that this controller is a good model of human motor adaptation. Robotic implementations further demonstrate its capabilities to optimally adapt interaction with dynamic environments and humans in joint-torque-controlled robots and variable impedance actuators, without requiring interaction force sensing. Index Terms: Feedforward force, human motor control, impedance, robotic control.
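The simultaneous adaptation of feedforward force and impedance can be illustrated with a generic error-driven update. This is not the authors' exact adaptation law; the gains, intrinsic stiffness, and environment model below are all invented for the sketch:

```python
# Toy 1D interaction task: the environment exerts a constant disturbance force
# and a destabilising negative stiffness (typical of tool use, e.g. carving).
disturbance = 2.0
env_stiffness = -3.0
arm_stiffness = 4.0        # intrinsic stiffness; gives an initial margin

F, K = 0.0, 0.0            # adapted feedforward force and added impedance
alpha, beta, gamma = 0.4, 0.5, 0.01   # learning gains and a forgetting factor

errors = []
for trial in range(30):
    # Quasi-static tracking error at equilibrium on this trial: the total
    # stiffness must dominate the environment's negative stiffness.
    e = (disturbance - F) / (arm_stiffness + K + env_stiffness)
    errors.append(abs(e))
    F += alpha * e                     # feedforward learns the disturbance
    K += beta * abs(e) - gamma * K     # stiffen under error, relax otherwise
```

The qualitative behaviour matches the abstract: impedance rises while the task is poorly compensated, then relaxes as the learned feedforward force takes over, so effort is not wasted on unnecessary stiffness.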
Episodic Learning with Control Lyapunov Functions for Uncertain Robotic Systems
Many modern nonlinear control methods aim to endow systems with guaranteed
properties, such as stability or safety, and have been successfully applied to
the domain of robotics. However, model uncertainty remains a persistent
challenge, weakening theoretical guarantees and causing implementation failures
on physical systems. This paper develops a machine learning framework centered
around Control Lyapunov Functions (CLFs) to adapt to parametric uncertainty and
unmodeled dynamics in general robotic systems. Our proposed method proceeds by
iteratively updating estimates of Lyapunov function derivatives and improving
controllers, ultimately yielding a stabilizing quadratic program model-based
controller. We validate our approach on a planar Segway simulation,
demonstrating substantial performance improvements by iteratively refining on a
base model-free controller.
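CLF-based controllers of the kind described typically enforce the decrease condition dV/dt <= -lambda * V pointwise via a quadratic program; with a single input, the min-norm QP has a closed form. A sketch on an invented scalar system (not the paper's Segway model or learned dynamics):

```python
# Scalar control-affine system  x' = f(x) + g(x) u  with CLF  V(x) = x^2 / 2.
f = lambda x: x          # unstable drift, invented for the example
g = lambda x: 1.0
lam = 2.0                # required decay rate:  dV/dt <= -lam * V

def clf_qp(x):
    """min u^2  s.t.  dV/dx * (f + g*u) <= -lam * V  (closed form, 1 input)."""
    V = 0.5 * x * x
    a = x * g(x)                      # coefficient of u in the constraint
    b = -lam * V - x * f(x)           # constraint slack at u = 0
    if b >= 0.0 or a == 0.0:
        return 0.0                    # zero input already satisfies the CLF
    return b / a                      # min-norm u on the constraint boundary

# Simulate: V decays at least at rate lam along the closed loop.
x, dt = 1.0, 0.01
for _ in range(500):
    x += dt * (f(x) + g(x) * clf_qp(x))
final_V = 0.5 * x * x
```

In the episodic-learning setting, the same QP is re-posed around successively better estimates of the Lyapunov function derivative, which is what tightens the guarantee as data accumulates.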
Learning Task Priorities from Demonstrations
Bimanual operations in humanoids offer the possibility to carry out more than
one manipulation task at the same time, which in turn introduces the problem of
task prioritization. We address this problem from a learning from demonstration
perspective, by extending the Task-Parameterized Gaussian Mixture Model
(TP-GMM) to Jacobian and null space structures. The proposed approach is tested
on bimanual skills but can be applied in any scenario where the prioritization
between potentially conflicting tasks needs to be learned. We evaluate the
proposed framework in: two different tasks with humanoids requiring the
learning of priorities and a loco-manipulation scenario, showing that the
approach can be exploited to learn the prioritization of multiple tasks in
parallel. Comment: Accepted for publication at the IEEE Transactions on Robotics
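The Jacobian and null-space structure this abstract refers to follows classical strict-priority redundancy resolution: secondary task velocities act only through the null-space projector of the primary Jacobian. A sketch with an invented 3-DoF arm, using the simple projected variant rather than the task-augmented pseudoinverse:

```python
import numpy as np

# Strict two-level task priority: the secondary task acts only in the
# null space of the primary task's Jacobian.
J1 = np.array([[1.0, 0.5, 0.2]])        # primary task Jacobian (1x3, invented)
J2 = np.array([[0.0, 1.0, 1.0]])        # secondary task Jacobian (invented)
dx1 = np.array([0.3])                   # desired primary task velocity
dx2 = np.array([0.1])                   # desired secondary task velocity

J1_pinv = np.linalg.pinv(J1)
N1 = np.eye(3) - J1_pinv @ J1           # projector onto null(J1)

dq = J1_pinv @ dx1 + N1 @ (np.linalg.pinv(J2) @ dx2)

# The primary task is achieved exactly; the secondary task is realised only
# as far as the remaining redundancy allows.
primary_achieved = J1 @ dq
```

What the TP-GMM extension learns from demonstrations is, in effect, when and how strongly each task occupies this hierarchy, rather than hand-coding the priorities.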
- …