7 research outputs found

    Semi-autonomous Navigation of a Robotic Wheelchair

    No full text
    The present work considers the development of a wheelchair for people with special needs, which is capable of navigating semi-autonomously within its workspace. Such a system is expected to prove useful to people with impaired mobility, who may have limited fine motor control of the upper extremities. Among the implemented behaviors of this robotic system are obstacle avoidance, motion along the middle of free space, and following a moving target specified by the user (e.g. following a person walking in front of the wheelchair). The wheelchair is equipped with sonars, which are used for distance measurement in preselected critical directions, and with a panoramic camera (with a 360 degree field of view), which is used for following a moving target. After suitably processing the color sequence of the panoramic images using the color histogram of the desired target, the orientation of the target with respect to the wheelchair is determined, while its distance is determined by the sonars. The motion control laws developed for the system use the sensory data and take into account the nonholonomic kinematic constraints of the wheelchair, in order to guarantee certain desired features of the closed-loop system, such as stability, while preserving their simplicity for ease of implementation. An experimental prototype has been developed at ICS-FORTH, based on a commercially available wheelchair, where the sensors, the computing power and the electronics needed for the implementation of the navigation behaviors and of the user interfaces (touch screen, voice commands) were developed as add-on modules.
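    The histogram-based bearing estimation described above can be sketched roughly as follows. This is an illustrative assumption, not the authors' implementation: a toy red-channel colour model stands in for a proper HSV histogram, and the function names are invented for the example. Columns of the panorama are assumed to span 0 to 360 degrees.

    ```python
    import numpy as np

    def colour_histogram(patch, bins=8):
        """Normalised histogram over a coarse quantisation of the red channel
        (a toy colour model; a real system would use HSV or full RGB)."""
        h, _ = np.histogram(patch[..., 0], bins=bins, range=(0, 256))
        return h / max(h.sum(), 1)

    def target_bearing_deg(panorama, target_hist, bins=8):
        """Back-project the target's colour histogram over the panorama and
        return the bearing (degrees) of the best-scoring image column."""
        edges = np.linspace(0, 256, bins + 1)
        # Map each pixel's red value to its histogram bin index.
        idx = np.clip(np.digitize(panorama[..., 0], edges) - 1, 0, bins - 1)
        likelihood = target_hist[idx]        # per-pixel target likelihood
        col_score = likelihood.sum(axis=0)   # aggregate evidence per column
        best_col = int(col_score.argmax())
        return 360.0 * best_col / panorama.shape[1]
    ```

    The column with the highest back-projected score gives the target's orientation relative to the camera; the sonars would then supply the range along that direction.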

    Template-Based Hand Pose Recognition Using Multiple Cues

    No full text
    This paper presents a practical method for hypothesizing hand locations and subsequently recognizing a discrete number of poses in image sequences. In a typical setting the user is gesturing in front of a single camera and interactively performing gesture input with one hand. The approach is to identify likely hand locations in the image based on discriminative features of colour and motion. A set of exemplar templates is stored in memory and a nearest neighbour classifier is then used for hypothesis verification and pose estimation. The performance of the method is demonstrated on a number of example sequences, including recognition of static hand gestures and a navigation-by-pointing application.
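    The exemplar-matching step above can be sketched as a minimal nearest-neighbour classifier. This is an assumption-laden illustration, not the paper's implementation: the class name and distance threshold are invented, and the colour/motion hand detector that produces candidate patches is assumed to exist elsewhere.

    ```python
    import numpy as np

    class TemplatePoseClassifier:
        """Nearest-neighbour pose classification over stored exemplar templates."""

        def __init__(self):
            self.templates = []  # list of (flattened template, pose label)

        def add_exemplar(self, template, label):
            self.templates.append((np.asarray(template, dtype=float).ravel(), label))

        def classify(self, candidate, max_dist=np.inf):
            """Return (label, distance) of the nearest exemplar, or (None, inf)
            if no exemplar is close enough -- rejecting distant candidates
            doubles as hypothesis verification."""
            x = np.asarray(candidate, dtype=float).ravel()
            best = (None, np.inf)
            for tmpl, label in self.templates:
                d = np.linalg.norm(x - tmpl)
                if d < best[1]:
                    best = (label, d)
            return best if best[1] <= max_dist else (None, np.inf)
    ```

    Candidate hand patches from the colour/motion detector would be normalised to the template size before being passed to `classify`.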

    Navigational Support for Robotic Wheelchair Platforms: An Approach that Combines Vision and Range Sensors

    No full text
    An approach towards providing advanced navigational support to robotic wheelchair platforms is presented in this paper. Contemporary methods employed in robotic wheelchairs are based on the information provided by range sensors and its appropriate exploitation by means of obstacle avoidance techniques. However, since range sensors cannot support a detailed environment representation, these methods fail to provide advanced navigational assistance unless the environment is appropriately regulated (e.g. with the introduction of beacons). In order to avoid any modifications to the environment, we propose an alternative approach that employs computer vision techniques which facilitate space perception and navigation. Computer vision has not been introduced to date in rehabilitation robotics, since it has not been mature enough to meet the needs of this sensitive application. However, in the proposed approach, stable techniques are exploited that facilitate reliable, automatic nav..

    Extreme Learning Machine Based Hand Posture Recognition in Color-Depth Image

    No full text

    Modeling Paired Objects and Their Interaction

    No full text
    Object categorization and human action recognition are two important capabilities for an intelligent robot. Traditionally, they are treated separately. Recently, more researchers have started to model object features, object affordance, and human action at the same time. Most of these works build a relation model between single object features and human action, or between object affordance and human action, and use the models to improve object recognition accuracy.