Vibrotactile Feedback for Brain-Computer Interface Operation
To be mastered correctly, brain-computer interfaces (BCIs) need an uninterrupted flow
of feedback to the user. This feedback is usually delivered through the visual channel.
Our aim was to explore the benefits of vibrotactile feedback during users' training
and control of EEG-based BCI applications. A protocol for delivering vibrotactile feedback,
including specific hardware and software arrangements, was specified. In three studies
with 33 subjects (including 3 with spinal cord injury), we compared vibrotactile and visual
feedback, addressing: (I) the feasibility of subjects' training to master their EEG rhythms
using tactile feedback; (II) the compatibility of this form of feedback in the presence of a visual
distracter; (III) performance in the presence of a complex visual task on the same (visual)
or different (tactile) sensory channel. The stimulation protocol we developed supports a general
usage of the tactors beyond these preliminary experiments. All studies indicated that the vibrotactile channel
can function as a valuable feedback modality with reliability comparable to the classical visual
feedback. Advantages of vibrotactile feedback emerged when the visual channel was
highly loaded by a complex task. In all experiments, vibrotactile feedback felt, after some training,
more natural to both control and SCI users.
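The feedback loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration, assuming the BCI classifier emits a signed, normalised control value in [-1, 1] that is quantised into discrete tactor vibration levels; the paper's actual stimulation protocol and hardware arrangement are not specified here.

```python
def feedback_intensity(classifier_output, n_levels=4):
    """Map a normalised classifier output in [-1, 1] to a discrete
    vibration level (hypothetical mapping; the paper's actual
    stimulation protocol is hardware-specific)."""
    magnitude = min(abs(classifier_output), 1.0)
    return round(magnitude * (n_levels - 1))

def tactor_side(classifier_output):
    """Sign of the output selects a left or right tactor (assumption)."""
    return "left" if classifier_output < 0 else "right"
```

A stronger classifier output drives a stronger vibration, giving the user continuous feedback on their EEG rhythm without occupying the visual channel.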
Adaptive Steering Behaviour Modelling for Power Wheelchair Control (Adaptieve modellering van stuurgedrag voor elektrische rolstoelen)
status: published
Feature based omnidirectional sparse visual path following
Vision sensors are attractive for autonomous robots because they are a rich source of environment information. The main challenge in using images for mobile robots is managing this wealth of information. A relatively recent approach is the use of fast wide baseline local features, which we developed and used in the novel approach to sparse visual
path following described in this paper.
These local features have the great advantage that they can
be recognized even if the viewpoint differs significantly. This opens the door to a memory-efficient description of a path by descriptors of sparse images. We propose a method for re-execution of these paths by a series of visual homing operations, which yields a navigation method with unique properties: it is accurate, robust, fast, and without odometry error build-up.
Goedemé T., Tuytelaars T., Van Gool L., Vanacker G., Nuttin M., ''Feature based omnidirectional sparse visual path following'', Proceedings IEEE/RSJ international conference on intelligent robots and systems - IROS2005, pp. 1003-1008, August 2-6, 2005, Edmonton, Alberta, Canada.
status: published
Omnidirectional sparse visual path following with occlusion-robust feature tracking
Omnidirectional vision sensors are very attractive for autonomous robots because they offer a rich source of
environment information. The main challenge in using these
for mobile robots is managing this wealth of information. A
relatively recent approach is the use of fast wide baseline local features, which we developed and use in the novel sparse visual path following method described in this paper.
These local features have the great advantage that they can
be recognized even if the viewpoint differs significantly. This opens the door to a memory efficient description of a path by sparsely sampling it with images. We propose a method for re-execution of these paths by a series of visual homing operations.
Motion estimation is done by simultaneously tracking the set of features, with recovery of lost features by backprojecting them from a local sparse 3D feature map. This yields a navigation method with unique properties: it is accurate, robust, fast, and without odometry error build-up.
Goedemé T., Tuytelaars T., Van Gool L., Vanacker G., Nuttin M., ''Omnidirectional sparse visual path following with occlusion-robust feature tracking'', Proceedings 6th workshop on omnidirectional vision, camera networks and non-classical cameras, 8 pp., October 21, 2005, Beijing, China.
status: published
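The core of a visual homing step is matching local features between the current view and a stored path image, then deriving a steering direction. A minimal sketch, assuming descriptors are compared by nearest-neighbour distance with a ratio test and that omnidirectional features carry a bearing angle (the paper's actual wide-baseline features and 3D backprojection are not reproduced here):

```python
import numpy as np

def match_features(desc_cur, desc_ref, ratio=0.7):
    """Nearest-neighbour descriptor matching with a ratio test:
    accept a match only if the best distance is clearly smaller
    than the second best."""
    matches = []
    for i, d in enumerate(desc_cur):
        dists = np.linalg.norm(desc_ref - d, axis=1)
        order = np.argsort(dists)
        if len(dists) > 1 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches

def homing_direction(bearings_cur, bearings_ref, matches):
    """Average angular offset between matched omnidirectional bearings,
    wrapped into (-pi, pi]; a crude homing vector estimate."""
    diffs = [np.angle(np.exp(1j * (bearings_ref[j] - bearings_cur[i])))
             for i, j in matches]
    return float(np.mean(diffs))
```

Repeating this match-and-home step against each sparsely sampled path image re-executes the path without accumulating odometry error, since every step is anchored to a stored view rather than to the previous pose estimate.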
Global dynamic window approach for holonomic and non-holonomic mobile robots with arbitrary cross-section
This paper presents an extension of current Global Dynamic Window approaches to holonomic and non-holonomic mobile robots with an arbitrary cross-section. The algorithm proceeds in two stages. In order to account for an arbitrary robot footprint, the first stage takes the robot's orientation explicitly into account by constructing a navigation function in the (x, y, θ) configuration space. In a second stage, an admissible velocity is chosen from a window around the robot's current velocity, which contains all velocities that can be reached under the acceleration constraints. Fast computation over large areas is achieved by adopting multi-resolution (x, y) and (x, y, θ) planning. Several measures are taken to obtain safe and robust robot behaviour. Experimental results on our wheelchair test platform show the feasibility of the approach. © 2005 IEEE.
status: published
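The second stage described above can be sketched compactly. This is a simplified, hypothetical illustration for a differential-drive model: candidate velocities are sampled from the window reachable under acceleration limits, each is forward-simulated one step, and the pose is scored with a navigation-function cost (here an arbitrary callable `nav_cost(x, y, theta)`; the paper's multi-resolution navigation function is not reproduced):

```python
import numpy as np

def dynamic_window(v, w, a_max, alpha_max, dt, v_lim, w_lim):
    """Velocities reachable from (v, w) within one time step dt
    under the acceleration constraints, sampled on a small grid."""
    vs = np.clip(np.linspace(v - a_max * dt, v + a_max * dt, 5), 0.0, v_lim)
    ws = np.clip(np.linspace(w - alpha_max * dt, w + alpha_max * dt, 5),
                 -w_lim, w_lim)
    return [(vv, ww) for vv in vs for ww in ws]

def choose_velocity(v, w, nav_cost, dt=0.1,
                    a_max=1.0, alpha_max=2.0, v_lim=1.5, w_lim=2.0):
    """Pick the admissible velocity whose one-step forward-simulated
    pose minimises the navigation-function cost nav_cost(x, y, theta)."""
    best, best_cost = (0.0, 0.0), float("inf")
    for vv, ww in dynamic_window(v, w, a_max, alpha_max, dt, v_lim, w_lim):
        theta = ww * dt                                  # heading after one step
        x, y = vv * np.cos(theta) * dt, vv * np.sin(theta) * dt
        cost = nav_cost(x, y, theta)
        if cost < best_cost:
            best, best_cost = (vv, ww), cost
    return best
```

Because the cost is evaluated on a navigation function computed over the whole (x, y, θ) space rather than a local obstacle map, the chosen velocity does not get trapped in local minima the way a purely local dynamic window can.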
User-adapted plan recognition and user-adapted shared control: A Bayesian approach to semi-autonomous wheelchair driving
Many elderly and physically impaired people experience difficulties when maneuvering a powered wheelchair. To ease maneuvering, various research groups have equipped powered wheelchairs with sensors, additional computing power and intelligence. This paper presents a Bayesian approach to maneuvering assistance for wheelchair driving, which can be adapted to a specific user. The proposed framework is able to model and estimate even complex user intents, i.e. wheelchair maneuvers that the driver has in mind, and explicitly takes the uncertainty on the user's intent into account. User-specific properties and this uncertainty are incorporated not only during intent estimation but also when taking assistive actions, so that assistance is tailored to the user's driving skills. Decision making is modeled as a greedy Partially Observable Markov Decision Process (POMDP). Benefits of this approach are shown using experimental results in simulation and on our wheelchair platform Sharioto. © 2007 Springer Science+Business Media, LLC.
status: published
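The intent-estimation step above is, at its core, a recursive Bayesian filter over a discrete set of possible maneuvers. A minimal sketch under invented assumptions: the intent names and the likelihood values (a stand-in for the user-specific model p(steering signal | intent)) are purely illustrative, and the greedy POMDP action selection is not reproduced:

```python
import numpy as np

# Hypothetical discrete intents (maneuvers the driver may have in mind)
INTENTS = ["dock_at_table", "pass_door", "follow_corridor"]

def bayes_update(belief, likelihoods):
    """One recursive Bayes step: posterior is proportional to
    likelihood times prior, renormalised to sum to one."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

# Uniform prior over intents
belief = np.full(len(INTENTS), 1.0 / len(INTENTS))

# Hypothetical user model: p(observed joystick signal | intent)
for likelihood in ([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]):
    belief = bayes_update(belief, np.array(likelihood))
```

Keeping the full posterior, rather than committing to the single most likely maneuver, is what lets the assistive action account for uncertainty: when the belief is spread out, the controller can assist cautiously instead of steering toward a possibly wrong goal.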
Bayesian plan recognition and shared control under uncertainty: assisting wheelchair drivers by tracking fine motion paths
Recent years have witnessed a significant increase in the percentage of elderly and disabled people. Members of this population group very often require extensive help in performing daily tasks such as moving around or grasping objects. Unfortunately, assistive technology is not always available to the people who need it. For instance, steering a wheelchair can be an extremely fatiguing or simply impossible task for many elderly or disabled users. Most existing assistance platforms try to help users without considering their specific needs. However, driving performance may vary considerably across users due to different pathologies or merely temporary effects such as fatigue. We therefore propose in this paper a user-adapted shared control approach aimed at helping users drive a power wheelchair. Adaptation to the user is achieved by estimating the user's true intent from potentially noisy steering signals before assisting him/her. The user's driving performance is explicitly modeled in order to recognize the user's intention or plan, together with the uncertainty on it. Safe navigation is achieved by merging the potentially noisy input of the user with fine motion trajectories computed online by a 3D planner. Encouraging results on assisting a user who cannot steer to the left are reported on K.U.Leuven's intelligent wheelchair Sharioto. © 2007 IEEE.
status: published
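The merging of noisy user input with planner trajectories can be illustrated with a simple blending law. This is a hypothetical sketch, not the paper's actual controller: it assumes velocity commands are (linear, angular) pairs and that a scalar confidence in the intent estimate weights the planner against the user.

```python
def shared_control(user_cmd, planner_cmd, intent_confidence):
    """Blend the user's (possibly noisy) velocity command with the
    planner's command; the more confident the intent estimate, the
    more the planner assists (hypothetical linear blending law)."""
    k = max(0.0, min(1.0, intent_confidence))  # clamp to [0, 1]
    return tuple(k * p + (1.0 - k) * u
                 for u, p in zip(user_cmd, planner_cmd))
```

For example, a user who cannot steer left might issue (0.5, 0.0) while the planner's fine motion path calls for (0.4, 0.6); with confidence 0.8 the blended command is approximately (0.42, 0.48), supplying the left turn the user cannot produce while still respecting their forward input.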
