7 research outputs found

    Vision-Based Shared Control for a BCI Wheelchair

    Brain-actuated wheelchairs offer paraplegics the potential to gain a degree of independence in performing activities of daily living. It is not currently possible to achieve precise proportional control of devices using the low-resolution output of a brain-computer interface (BCI). Consequently, we have developed a shared control system that interprets such commands given the context of the surroundings. In this paper we show that a vision system provides sufficiently reliable information to the shared controller to enable synthesized BCI subjects to drive safely in an office environment. The shared controller reduces both the time and the number of commands required to perform a task.
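
    As a rough illustration of the shared-control idea (a sketch, not the authors' implementation), a coarse BCI command can be damped and biased by obstacle distances supplied by a vision system; the function names, sector layout, and thresholds below are all assumptions:

```python
import numpy as np

def shared_control(bci_cmd: str, obstacle_dists: np.ndarray) -> tuple[float, float]:
    """Map a coarse BCI command to a (linear, angular) velocity, given context.

    bci_cmd:        one of 'forward', 'left', 'right' from the BCI classifier.
    obstacle_dists: distances (m) to the nearest obstacle in three sectors
                    [left, centre, right], as estimated by the vision system.
    """
    # Nominal velocities per command (illustrative values).
    base = {'forward': (0.3, 0.0), 'left': (0.15, 0.5), 'right': (0.15, -0.5)}
    v, w = base[bci_cmd]

    # Scale linear speed by clearance ahead; stop inside a 0.4 m safety margin.
    clearance = obstacle_dists[1]
    v *= float(np.clip((clearance - 0.4) / 1.0, 0.0, 1.0))

    # Bias steering away from the closer side, so sparse commands stay safe.
    side_bias = 0.3 * np.tanh(1.0 / max(obstacle_dists[2], 0.1)
                              - 1.0 / max(obstacle_dists[0], 0.1))
    return v, w + side_bias
```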

    Vision Based Obstacle Avoidance Techniques


    Brain-Controlled Wheelchairs: A Robotic Architecture

    Independent mobility is core to being able to perform activities of daily living by oneself. However, powered wheelchairs are not an option for a large number of people who are unable to use conventional interfaces due to severe motor disabilities. Non-invasive brain-computer interfaces (BCIs) offer a promising solution to this interaction problem, and in this article we present a shared control architecture that couples the intelligence and desires of the user with the precision of a powered wheelchair. We show how four healthy subjects are able to master control of the wheelchair using an asynchronous motor-imagery-based BCI protocol and how this results in higher overall task performance compared with alternative synchronous P300-based approaches.
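
    A minimal sketch of the asynchronous idea, assuming an evidence-accumulation scheme over per-sample classifier probabilities (the smoothing, threshold, and reset behaviour are illustrative, not taken from the paper):

```python
def accumulate_evidence(prob_stream, alpha=0.9, threshold=0.8):
    """Exponentially smooth P(left) samples from a motor-imagery classifier
    and yield a command only when accumulated evidence crosses a threshold,
    so decisions are self-paced rather than locked to stimulus onsets."""
    p = 0.5  # running probability of class 'left'
    for prob_left in prob_stream:
        p = alpha * p + (1 - alpha) * prob_left
        if p > threshold:
            yield 'left'
            p = 0.5  # reset accumulated evidence after each decision
        elif p < 1 - threshold:
            yield 'right'
            p = 0.5
```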

    Feature recognition and obstacle detection for drive assistance in indoor environments

    The goal of this research project was to develop a robust feature recognition and obstacle detection method for smart wheelchair navigation in indoor environments. As two types of depth sensors were employed, two different methods were proposed and implemented in this thesis. The two methods combined colour, edge, depth and motion information to detect obstacles, compute movements and recognize indoor room features. The first method was based on a stereo vision sensor and started by optimizing the noisy disparity images; RANSAC was then used to estimate the ground plane, followed by a watershed-based image segmentation algorithm for ground pixel classification. Meanwhile, a novel algorithm, a standard deviation ridge straight line detector, was applied to extract straight lines from the RGB images. This algorithm provides more useful information than the Canny edge detector combined with the Hough Transform. Novel drop-off and stairs-up detection algorithms based on the proposed straight line detector were then developed, and camera movements were calculated by optical flow. The second method was based on a structured light sensor. After RANSAC ground plane estimation, morphology operations were applied to smooth the ground surface area. An obstacle detection algorithm then created a top-down map of the ground plane using inverse perspective mapping and segmented obstacles using a region-growing algorithm. Both the drop-off and open-door detection algorithms employ straight lines extracted from depth discontinuity maps. The performance and accuracy of the two proposed methods were evaluated. Results show that ground plane classification using the first method achieved 98.58% true positives, a figure that improved to 99% with the second method. The drop-off detection algorithm using the first method also achieved good results, with no false negatives found in the test video sequences. The system provided top-down maps of the surroundings to detect and segment obstacles correctly. Overall, the results, which show accurate distances to various detected indoor features and obstacles, suggest that the proposed colour/edge/motion/depth approach would be useful as a navigation aid through doorways and hallways.
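
    The RANSAC ground-plane estimation step common to both methods can be sketched as follows. This is a generic plane-fitting routine rather than the thesis implementation, and the iteration count and inlier tolerance are assumed values:

```python
import numpy as np

def ransac_ground_plane(points, iters=200, tol=0.02, rng=None):
    """Fit a plane n.x + d = 0 to 3-D points (N x 3), returning the
    (normal, offset) model with the most inliers within `tol` metres."""
    rng = rng or np.random.default_rng(0)
    best, best_inliers = None, 0
    for _ in range(iters):
        # Hypothesize a plane from three random points.
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n /= norm
        d = -n @ sample[0]
        # Count points within `tol` of the hypothesized plane.
        inliers = int(np.sum(np.abs(points @ n + d) < tol))
        if inliers > best_inliers:
            best, best_inliers = (n, d), inliers
    return best
```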

    Region Classification for Robust Floor Detection in Indoor Environments


    Mobile robot navigation using a vision based approach

    PhD Thesis. This study addresses the issue of vision-based mobile robot navigation in a partially cluttered indoor environment using a mapless navigation strategy. The work focuses on two key problems: vision-based obstacle avoidance and a vision-based reactive navigation strategy. The estimation of optical flow plays a key role in vision-based obstacle avoidance; however, the current view is that this technique is too sensitive to noise and distortion under real conditions, so practical applications in real-time robotics remain scarce. This dissertation presents a novel methodology for vision-based obstacle avoidance using a hybrid architecture, which integrates an appearance-based obstacle detection method into an optical flow architecture built on a behavioural control strategy that includes a new arbitration module. This enhances the overall performance of conventional optical-flow-based navigation systems, enabling a robot to move around successfully without experiencing collisions. Behaviour-based approaches have become the dominant methodology for designing control strategies for robot navigation. Two different behaviour-based navigation architectures are proposed for the second problem, using monocular vision as the primary sensor together with a 2-D range finder. Both utilize an accelerated version of the Scale Invariant Feature Transform (SIFT) algorithm. The first architecture employs a qualitative control algorithm to steer the robot towards a goal whilst avoiding obstacles, whereas the second employs an intelligent control framework. This allows components of soft computing to be integrated into the proposed SIFT-based navigation architecture while conserving the same set of behaviours and system structure as the previously defined architecture. The intelligent framework incorporates a novel distance estimation technique using the scale parameters obtained from the SIFT algorithm: scale parameters and a corresponding zooming factor are used as inputs to train a neural network, which determines physical distance. Furthermore, a fuzzy controller is designed and integrated into this framework to estimate linear velocity, and a neural network based solution is adopted to estimate the steering direction of the robot. As a result, this intelligent approach allows the robot to complete its task successfully in a smooth and robust manner without experiencing collisions. MS Robotics Studio software was used to simulate the systems, and a modified Pioneer 3-DX mobile robot was used for real-time implementation. Several realistic scenarios were developed and comprehensive experiments conducted to evaluate the performance of the proposed navigation systems.
    KEY WORDS: Mobile robot navigation using vision, Mapless navigation, Mobile robot architecture, Distance estimation, Vision for obstacle avoidance, Scale Invariant Feature Transforms, Intelligent framework
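
    To make the optical-flow avoidance idea concrete, a hedged sketch using OpenCV's dense Farnebäck flow is given below; the left/right balance strategy and thresholds are illustrative assumptions, not the thesis's hybrid architecture or arbitration module:

```python
import cv2
import numpy as np

def flow_steering(prev_gray, gray):
    """Return a steering command from the left/right optical-flow balance
    between consecutive greyscale frames: larger apparent motion on one
    side suggests closer obstacles, so the robot turns the other way."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)          # per-pixel flow magnitude
    h, w = mag.shape
    left = mag[:, :w // 2].mean()
    right = mag[:, w // 2:].mean()
    imbalance = (right - left) / (left + right + 1e-6)
    if abs(imbalance) < 0.1:                    # assumed dead-band
        return 'forward'
    return 'left' if imbalance > 0 else 'right' # turn away from larger flow
```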