
    Corridor following wheelchair by visual servoing

    In this paper, we present an autonomous navigation framework for a wheelchair by means of a single camera and visual servoing. We focus on a corridor-following task where no prior knowledge of the environment is required. Our approach embeds an image-based controller, thus avoiding the need to estimate the pose of the wheelchair. The servoing process matches the nonholonomic constraints of the wheelchair and relies on two visual features, namely the vanishing point location and the orientation of the median line formed by the straight lines at the bottom of the walls. This overcomes the process initialization issue typically raised in the literature. The control scheme has been implemented on a robotized wheelchair, and results show that it can follow a corridor with an accuracy of ±3 cm.
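    The idea of regulating two image features with an image-based control law can be sketched as follows. This is a minimal illustration, not the paper's actual controller: the gains, the normalization of the features, and the linear combination are all assumptions.

```python
def corridor_steering(x_vp, theta_m, lam_vp=0.8, lam_th=0.5, v=0.3):
    """Image-based steering for corridor following (illustrative sketch).

    x_vp:    horizontal offset of the vanishing point from the image
             center (normalized image coordinates).
    theta_m: orientation of the corridor's median line in the image
             (radians, 0 when aligned with the corridor axis).
    Returns (v, omega): a constant forward speed and an angular
    velocity that drives both features to zero. Gains lam_vp and
    lam_th are illustrative values, not from the paper.
    """
    omega = -lam_vp * x_vp - lam_th * theta_m
    return v, omega

# Wheelchair drifted right: the vanishing point appears left of center,
# so the controller commands a positive (leftward) rotation.
v, omega = corridor_steering(x_vp=-0.2, theta_m=0.1)
```

    Driving both features to zero centers the wheelchair in the corridor and aligns its heading with the corridor axis, without any pose estimate.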

    Real-time GP-based wheelchair corridor following

    In this paper, we present a novel GP-based visual controller. HOG features are used as a global representation of the observed image. A Gaussian Process (GP) regressor is trained to learn the mapping from the HOG feature vector onto the velocity variables. The GP is trained on corridor images collected from different places; these images are labeled with velocity values generated by a geometric control law and robust features. A manual verification of the features is done to ensure the accuracy of the ground-truth labels. Experiments were conducted to explore the capabilities of the developed approach. Results show a coefficient of determination (R²) above 0.90 for the trained GP model under noisy conditions. © 2021 IEEE
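    The core regression step (feature vector in, velocity out) can be sketched with a small zero-mean GP using an RBF kernel. This is a toy stand-in: the paper uses HOG descriptors with hundreds of dimensions and its own kernel/hyperparameters, none of which are reproduced here.

```python
import numpy as np

def rbf_kernel(A, B, length=1.0, var=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    """Posterior mean of a zero-mean GP regressor."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_test, X_train)
    return K_s @ np.linalg.solve(K, y_train)

# Toy stand-in for HOG vectors (real ones are much higher-dimensional):
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
w = np.array([0.0, 0.3, -0.3, 0.0])   # angular-velocity labels
pred = gp_predict(X, w, X)            # predictions at the training inputs
```

    At test time, a single kernel evaluation against the training set and one matrix-vector product yield the velocity command, which is what makes the approach real-time capable.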

    Vision-based assistance for wheelchair navigation along corridors

    In the case of motor impairments, steering a wheelchair can become a hazardous task. Typically, along corridors, joystick jerks induced by uncontrolled motions are a source of wall collisions. This paper describes a vision-based assistance solution for safe indoor semi-autonomous navigation. To this end, the control process is based on a visual servoing scheme designed for wall avoidance. As the patient manually drives the wheelchair, a virtual guide is defined to progressively activate an automatic trajectory correction. The proposed solution does not require any knowledge of the environment. Experiments have been conducted in corridors with different configurations and illumination conditions. Results demonstrate the ability of the system to smoothly and adaptively assist people during their motions.

    Appearance-based Indoor Navigation by IBVS using Line Segments

    Also presented at the IEEE Int. Conf. on Robotics and Automation, Stockholm, Sweden.

    Low complex sensor-based shared control for power wheelchair navigation

    Motor or visual impairments may prevent a user from steering a wheelchair effectively in indoor environments. In such cases, joystick jerks arising from uncontrolled motions may lead to collisions with obstacles. We propose a perceptive shared control system that progressively corrects the trajectory as the user manually drives the wheelchair, by means of a sensor-based shared control law capable of smoothly avoiding obstacles. This control law is based on a low-complexity optimization framework validated through simulations and extensive clinical trials. The model uses distance information; therefore, for low-cost considerations, we use ultrasonic sensors to measure the distances around the wheelchair. The solution thus provides an efficient assistive tool that does not alter the quality of experience perceived by the user, while ensuring their safety in hazardous situations.
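    The progressive-correction idea can be sketched as a blend between the joystick command and a distance-driven correction. The sensor layout, thresholds, and gains below are illustrative assumptions; the paper's actual law comes from an optimization framework.

```python
def shared_control(v_user, w_user, distances, d_safe=0.6):
    """Blend the joystick command with an avoidance correction (sketch).

    distances: ultrasonic readings in metres around the wheelchair,
               e.g. {"left": 1.2, "right": 0.3, "front": 2.0}.
    The closer an obstacle, the stronger the correction; d_safe and
    the gain 0.8 are illustrative, not the paper's values.
    """
    # Activation grows from 0 (obstacle far) to 1 (obstacle touching).
    act = {k: max(0.0, 1.0 - d / d_safe) for k, d in distances.items()}
    # Steer away from the closer side wall (positive w = turn left),
    # and slow down in front of frontal obstacles.
    w = w_user + 0.8 * (act["right"] - act["left"])
    v = v_user * (1.0 - act["front"])
    return v, w

# Wall close on the right: the forward speed is kept, the heading
# is progressively corrected to the left.
v, w = shared_control(0.5, 0.0, {"left": 1.2, "right": 0.3, "front": 2.0})
```

    Far from any obstacle all activations are zero and the user's command passes through unchanged, which is what keeps the assistance transparent during normal driving.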

    Towards a Shared Control Navigation Function: Efficiency Based Command Modulation

    This paper presents a novel shared control algorithm for robotized wheelchairs. The proposed algorithm is a new method to extend autonomous navigation techniques into the shared control domain. It reactively combines the user's and the robot's commands into a continuous function that approximates a classic Navigation Function (NF) by weighting the input commands with NF constraints. Our approach overcomes the main drawbacks of NFs (computational complexity and limitations on environment modeling), so it can be used in dynamic unstructured environments. It also benefits from NF properties: convergence to the destination, smooth paths, and safe navigation. Because of the user's contribution to control, our function is not strictly an NF, so we call it a pseudo-navigation function (PNF) instead. Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
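    The command-modulation idea — user and robot commands combined continuously, with authority shifted by NF-style constraints — can be sketched as a weighted blend. The scalar `risk` stands in for the constraint evaluation, and the weighting shape `risk**k` is an assumption for illustration only.

```python
def pnf_blend(u_user, u_robot, risk, k=2.0):
    """Continuously blend user and robot commands (illustrative sketch).

    u_user, u_robot: command tuples, e.g. (linear, angular) velocity.
    risk in [0, 1]:  how strongly the navigation-function constraints
                     are active at the current state (0 = free space,
                     1 = at an obstacle). The shape risk**k is assumed.
    """
    alpha = min(1.0, max(0.0, risk)) ** k   # robot's share of authority
    return tuple(alpha * r + (1.0 - alpha) * u
                 for u, r in zip(u_user, u_robot))

# In free space the user's command passes through unchanged;
# near an obstacle the robot's safe command dominates.
cmd = pnf_blend((0.5, 0.1), (0.2, -0.3), risk=0.0)
```

    Because the blend is continuous in `risk`, the resulting command field stays smooth, which is one of the NF properties the PNF is designed to retain.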

    A mosaic of eyes

    Autonomous navigation is a traditional research topic in intelligent robotics and vehicles: it requires a robot to perceive its environment through onboard sensors such as cameras or laser scanners in order to drive to its goal. Most research to date has focused on developing a large, smart brain to give robots autonomous capability. An autonomous mobile robot must answer three fundamental questions: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these basic questions, a robot requires massive spatial memory and considerable computational resources for perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for real-life applications such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then making decisions accordingly. They may encounter the following difficulties.

    Non-Metrical Navigation Through Visual Path Control

    We describe a new method for wide-area, non-metrical robot navigation which enables useful, purposeful motion indoors. Our method has two phases: a training phase, in which a human user directs a wheeled robot with an attached camera through an environment while occasionally supplying textual place names; and a navigation phase, in which the user specifies goal place names (again as text) and the robot issues low-level motion control in order to move to the specified place. We show that differences in the visual-field locations and scales of features matched across training and navigation can be used to construct a simple and robust control rule that guides the robot onto and along the training motion path. Our method uses an omnidirectional camera, requires approximate intrinsic and extrinsic camera calibration, and is capable of effective motion control within an extended, minimally-prepared building environment floor plan. We give results for deployment within a single building floor with 7 rooms, 6 corridor segments, and 15 distinct place names.
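    A control rule of this flavor — bearing differences of matched features drive rotation, scale growth signals arrival at the training viewpoint — can be sketched as follows. The representation of matches, the gains, and the stop condition are all assumptions for illustration, not the paper's formulation.

```python
def visual_path_step(matches, k_turn=1.0, v_fwd=0.3):
    """One control step from features matched between the current view
    and the nearest training view (illustrative sketch).

    matches: list of (u_train, u_now, s_train, s_now), where u is a
    feature's horizontal image coordinate and s its detected scale.
    """
    n = len(matches)
    # Rotate so current feature bearings re-align with the training ones.
    du = sum(u_t - u_n for u_t, u_n, _, _ in matches) / n
    omega = k_turn * du
    # Advance while features are still smaller than in the training view,
    # i.e. the robot has not yet reached the training viewpoint.
    growth = sum(s_n / s_t for _, _, s_t, s_n in matches) / n
    v = v_fwd if growth < 1.0 else 0.0
    return v, omega
```

    Chaining such steps along a sequence of training views reproduces the taught path without ever building a metric map.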

    Trajectory optimization and motion planning for quadrotors in unstructured environments

    Coming out of university labs, robots usually perform their tasks while navigating through unstructured environments. Realizing autonomous motion in such environments poses a number of challenges compared to highly controlled laboratory spaces. In unstructured environments, robots cannot rely on complete knowledge of their surroundings and must continuously acquire information for decision making. These challenges are a consequence of the high dimensionality of the state space and of the uncertainty introduced by modeling and perception. This is even more true for aerial robots, which have complex nonlinear dynamics and can move freely in 3D space. To handle this complexity, a robot has to select a small set of relevant features, reason on a reduced state space, and plan trajectories over a short time horizon. This thesis is a contribution towards the autonomous navigation of aerial robots (quadrotors) in real-world unstructured scenarios. The first three chapters present a contribution towards an implementation of receding-horizon optimal control. The optimization problem for model-based trajectory generation in environments with obstacles is set up using an approach based on variational calculus, modeling the robots in the SE(3) Lie group of 3D space transformations. The fourth chapter explores the problem of using minimal information and sensing to generate motion towards a goal in an indoor, building-like scenario. The fifth chapter investigates the problem of extracting visual features from the environment to control motion in an indoor, corridor-like scenario. The last chapter deals with the problem of spatial reasoning and motion planning using atomic propositions in multi-robot environments with obstacles.