An adaptive scheme for wheelchair navigation collaborative control
In this paper we propose a system in which machine and human cooperate in every situation via a reactive emergent behavior, so that the person is always in charge of his/her own motion. Our approach relies on locally evaluating the performance of the human and the wheelchair in each given situation. Their motion commands are then weighted according to those efficiencies and combined in a reactive way. This approach benefits from the advantages of typical reactive behaviors, combining different sources of information in a simple, seamless way into an emergent trajectory.
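The efficiency-weighted combination described in this abstract can be sketched as follows; the command representation, function names, and the normalisation scheme are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def blend_commands(u_human, u_robot, eff_human, eff_robot):
    """Weight each agent's motion command by its locally evaluated
    efficiency and combine them into a single emergent command.

    u_human, u_robot : np.ndarray, e.g. (v, w) linear/angular velocities
    eff_human, eff_robot : floats in [0, 1], local performance scores
    (all shapes and score ranges here are assumptions for illustration)
    """
    total = eff_human + eff_robot
    if total == 0.0:
        return np.zeros_like(u_human)  # no confident source: stop
    w_h = eff_human / total
    w_r = eff_robot / total
    return w_h * u_human + w_r * u_robot

# Example: the human steers hard left while performing well (0.8),
# the wheelchair suggests going straight but scores low (0.2);
# the emergent command is biased toward the human's input.
u = blend_commands(np.array([0.5, 1.0]), np.array([0.6, 0.0]), 0.8, 0.2)
```

Because the weights are computed per situation, the same mechanism smoothly shifts authority toward whichever agent is currently performing better, without a discrete mode switch.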
Collaborative Control for a Robotic Wheelchair: Evaluation of Performance, Attention, and Workload
Powered wheelchair users often struggle to drive safely and effectively and in more critical cases can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists the user as and when they require help. The system uses a multiple-hypotheses method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance, but, perhaps more importantly, we characterise the user performance, in an experiment that combines eye-tracking with a secondary task. Without assistance, participants experienced multiple collisions whilst driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely, but they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.
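One way a multiple-hypotheses intention predictor can be sketched is as a Bayesian update over a set of candidate goals, where goals whose direction agrees with the user's joystick input gain probability. The likelihood model and every name below are assumptions for illustration, not the authors' method:

```python
import numpy as np

def update_goal_beliefs(beliefs, goals, position, u_user, beta=2.0):
    """One Bayesian update over hypothesised goals.

    beliefs  : prior probabilities, one per goal
    goals    : candidate goal positions, e.g. (x, y) tuples
    position : current wheelchair position
    u_user   : the user's joystick direction
    beta     : rationality parameter (assumed; higher = sharper update)
    """
    likelihoods = []
    u = np.asarray(u_user, dtype=float)
    u = u / (np.linalg.norm(u) + 1e-9)
    for g in goals:
        to_goal = np.asarray(g, dtype=float) - np.asarray(position, dtype=float)
        to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
        # input pointing toward a goal makes that goal more likely
        likelihoods.append(np.exp(beta * float(np.dot(to_goal, u))))
    post = np.asarray(beliefs, dtype=float) * np.asarray(likelihoods)
    return post / post.sum()
```

Once one hypothesis dominates, the controller can adjust the control signals toward that goal; until then, assistance stays conservative.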
Assistive Planning in Complex, Dynamic Environments: a Probabilistic Approach
We explore the probabilistic foundations of shared control in complex dynamic environments. In order to do this, we formulate shared control as a random process and describe the joint distribution that governs its behavior. For tractability, we model the relationships between the operator, autonomy, and crowd as an undirected graphical model. Further, we introduce an interaction function between the operator and the robot, that we call "agreeability"; in combination with the methods developed in~\cite{trautman-ijrr-2015}, we extend a cooperative collision avoidance autonomy to shared control. We therefore quantify the notion of simultaneously optimizing over agreeability (between the operator and autonomy), and safety and efficiency in crowded environments. We show that for a particular form of interaction function between the autonomy and the operator, linear blending is recovered exactly. Additionally, to recover linear blending, unimodal restrictions must be placed on the models describing the operator and the autonomy. In turn, these restrictions raise questions about the flexibility and applicability of the linear blending framework. Additionally, we present an extension of linear blending called "operator biased linear trajectory blending" (which formalizes some recent approaches in linear blending such as~\cite{dragan-ijrr-2013}) and show that not only is this also a restrictive special case of our probabilistic approach, but more importantly, is statistically unsound, and thus, mathematically, unsuitable for implementation. Instead, we suggest a statistically principled approach that guarantees data is used in a consistent manner, and show how this alternative approach converges to the full probabilistic framework. We conclude by proving that, in general, linear blending is suboptimal with respect to the joint metric of agreeability, safety, and efficiency.
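For reference, the linear blending baseline that the abstract shows to be a restrictive special case is simply a convex combination of the operator's and the autonomy's commands. A minimal sketch, with the arbitration weight `alpha` treated as a given scalar:

```python
import numpy as np

def linear_blend(u_operator, u_autonomy, alpha):
    """Linear blending: u = alpha * u_operator + (1 - alpha) * u_autonomy.

    alpha in [0, 1] sets the arbitration between operator and autonomy;
    how alpha is chosen is exactly what the probabilistic analysis in
    the paper examines, and is not specified here.
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return alpha * np.asarray(u_operator, dtype=float) \
        + (1.0 - alpha) * np.asarray(u_autonomy, dtype=float)
```

The paper's point is that recovering this form from the full joint distribution requires unimodal models of both agents, which is what makes the framework restrictive in crowded, multimodal environments.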
Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It gives a brief overview of the existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and suggests some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.
Design requirements and challenges for intelligent power wheelchair use in crowds: learning from expert wheelchair users
An intelligent or smart power wheelchair is normally built on a standard power wheelchair with additional modules for perception, navigation, or interaction. These modules add autonomy to the wheelchair and provide a technical solution to safety concerns, thus opening up powered mobility to people who are considered unsuitable candidates for a standard power wheelchair. Although research in this field has been going on for decades, most of it focuses on static or simple dynamic environments. In addition, the role of the user is sometimes overlooked during the design and development process.
In our project, we aim to design a user-centred intelligent wheelchair and extend its application area to one of the most difficult scenarios faced by wheelchair users: navigating among crowds. As we start the process of designing a smart wheelchair, we present the results of an initial study with expert wheelchair users to gain insights into their design requirements and the challenges they face when navigating in crowds.
One-shot assistance estimation from expert demonstrations for a shared control wheelchair system
An emerging research problem in the field of assistive robotics is the design of methodologies that allow robots to provide human-like assistance to their users. Especially within the rehabilitation domain, a grand challenge is to program a robot to mimic the operation of an occupational therapist, intervening with the user when necessary so as to improve the therapeutic power of the assistive robotic system. We propose a method to estimate assistance policies from expert demonstrations in order to provide human-like intervention during navigation in a powered wheelchair setup. For this purpose, we constructed a setting where a human offers assistance to the user over a haptic shared control system. The robot learns from human assistance demonstrations while the user is actively driving the wheelchair in an unconstrained environment. We train a Gaussian process regression model to learn assistance commands given past and current actions of the user and the state of the environment. The results indicate that the model can estimate human assistance after only a single demonstration, i.e. in one-shot, so that the robot can help the user by selecting the appropriate assistance in a human-like fashion.
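The Gaussian process regression step described above can be sketched in a few lines. The feature layout, demonstration data, kernel length scale, and function names below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def rbf(A, B, length_scale=1.0):
    """Squared-exponential kernel between two sets of state vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale ** 2)

def gp_assist(X_demo, y_demo, x_query, noise=1e-3):
    """GP posterior mean: the assistance command predicted for a query
    state after fitting the expert demonstration."""
    K = rbf(X_demo, X_demo) + noise * np.eye(len(X_demo))
    k_star = rbf(x_query, X_demo)
    return k_star @ np.linalg.solve(K, y_demo)

# Hypothetical one-shot demonstration: each row stacks the user's past
# and current joystick actions with an obstacle-distance feature; the
# target is the corrective command the human assistant applied there.
X_demo = np.array([[0.2, 0.1, 1.5],
                   [0.4, 0.2, 1.0],
                   [0.6, 0.3, 0.5]])
y_demo = np.array([0.0, 0.1, 0.3])

# At run time, query the learned policy for the current state:
assist = gp_assist(X_demo, y_demo, np.array([[0.5, 0.25, 0.7]]))
```

A GP is a natural fit here because it interpolates smoothly from very few samples, which is what makes one-shot estimation plausible at all.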
Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots
This paper shows and evaluates a novel approach to integrate a non-invasive Brain-Computer Interface (BCI) with the Robot Operating System (ROS) to mentally drive a telepresence robot. Controlling a mobile device by using human brain signals might improve the quality of life of people suffering from severe physical disabilities or elderly people who cannot move anymore. Thus, the BCI user is able to actively interact with relatives and friends located in different rooms thanks to a video streaming connection to the robot. To facilitate the control of the robot via BCI, we explore new ROS-based algorithms for navigation and obstacle avoidance, making the system safer and more reliable. In this regard, the robot can exploit two maps of the environment, one for localization and one for navigation, and both can also be used by the BCI user to watch the position of the robot while it is moving. As demonstrated by the experimental results, the user's cognitive workload is reduced, decreasing the number of commands necessary to complete the task and helping him/her to keep attention for longer periods of time. Comment: Accepted in the Proceedings of the 2018 IEEE International Conference on Robotics and Automation.
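The core idea of reducing the number of BCI commands is to let each sparse, low-frequency command select a high-level navigation goal, with the onboard autonomy filling in the path. A minimal sketch of that dispatch pattern; the command names, waypoints, and callback interface are all assumptions, not the paper's design:

```python
# Sketch: one decoded BCI command triggers autonomous navigation to a
# named goal, so the user need not issue continuous motion commands.
# Waypoints and the `navigate` callback are hypothetical; in a ROS
# system the callback might wrap a navigation-stack goal publisher.

WAYPOINTS = {"kitchen": (3.0, 1.0), "bedroom": (0.5, 4.0)}

def dispatch_bci_command(command, navigate):
    """Translate one decoded BCI command into a navigation goal.

    command  : string label produced by the BCI decoder (assumed)
    navigate : callable taking a goal position (x, y)
    Returns True if the command mapped to a known goal.
    """
    if command in WAYPOINTS:
        navigate(WAYPOINTS[command])
        return True
    return False
```

Shifting the low-level driving to the navigation layer is what allows the reported reduction in command count and cognitive workload.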
- …