
    Trajectory optimization and motion planning for quadrotors in unstructured environments

    Coming out of university labs, robots increasingly perform tasks while navigating through unstructured environments. The realization of autonomous motion in such environments poses a number of challenges compared to highly controlled laboratory spaces. In unstructured environments robots cannot rely on complete knowledge of their surroundings and must continuously acquire information for decision making. These challenges are a consequence of the high dimensionality of the state space and of the uncertainty introduced by modeling and perception. This is even more true for aerial robots, which have complex nonlinear dynamics and can move freely in 3D space. To cope with this complexity a robot has to select a small set of relevant features, reason on a reduced state space, and plan trajectories over short time horizons. This thesis is a contribution towards the autonomous navigation of aerial robots (quadrotors) in real-world unstructured scenarios. The first three chapters present a contribution towards an implementation of Receding Time Horizon Optimal Control. The optimization problem for model-based trajectory generation in environments with obstacles is formulated using an approach based on variational calculus, modeling the robots in SE(3), the Lie group of 3D rigid-body transformations. The fourth chapter explores the problem of using minimal information and sensing to generate motion towards a goal in an indoor building-like scenario. The fifth chapter investigates the problem of extracting visual features from the environment to control motion in an indoor corridor-like scenario. The last chapter deals with the problem of spatial reasoning and motion planning using atomic propositions in a multi-robot environment with obstacles.
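The replan-as-you-go structure described above (reason on a reduced state, plan over a short horizon, execute, repeat) can be illustrated with a deliberately simple stand-in for the thesis's variational SE(3) formulation: an artificial potential field in 2D, where every iteration recomputes a local step from the current state. All names, gains, and geometry below are illustrative, not taken from the thesis.

```python
import numpy as np

def potential_step(pos, goal, obstacle, k_att=1.0, k_rep=0.5,
                   rep_radius=1.0, dt=0.1):
    """One short-horizon step: attractive pull toward the goal plus a
    repulsive push when within rep_radius of the obstacle."""
    force = k_att * (goal - pos)                      # attractive term
    diff = pos - obstacle
    d = np.linalg.norm(diff)
    if d < rep_radius:                                # repulsive term
        force += k_rep * (1.0 / d - 1.0 / rep_radius) * diff / d**3
    return pos + dt * force

def navigate(start, goal, obstacle, max_steps=300, tol=0.05):
    """Greedy replanning loop: recompute the local action from the
    current state at every iteration, in the spirit of receding-horizon
    control (here with a one-step horizon for brevity)."""
    pos = np.asarray(start, dtype=float)
    goal = np.asarray(goal, dtype=float)
    obstacle = np.asarray(obstacle, dtype=float)
    path = [pos.copy()]
    for _ in range(max_steps):
        pos = potential_step(pos, goal, obstacle)
        path.append(pos.copy())
        if np.linalg.norm(pos - goal) < tol:
            break
    return np.array(path)
```

A real receding-horizon controller would optimize a multi-step trajectory (here, over SE(3)) and execute only its first segment; the loop structure, however, is the same.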

    Robust and Cooperative Image-Based Visual Servoing System Using a Redundant Architecture

    The reliability and robustness of image-based visual servoing systems remain open problems. To address this issue, a redundant and cooperative 2D visual servoing system based on the information provided by two cameras in eye-in-hand/eye-to-hand configurations is proposed. Its control law is defined so that the whole system is stable if each subsystem is stable, and so that typical problems of image-based visual servoing systems, such as task singularities, feature-extraction errors, disappearance of image features, and local minima, can be avoided. Experimental results with an industrial robot manipulator based on Schunk modular motors are presented to demonstrate the stability, performance, and robustness of the proposed system.
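For readers unfamiliar with image-based (2D) visual servoing, the core of any such scheme is the classical control law v = -λ L⁺ (s - s*), which maps the image-feature error to a camera twist through the pseudo-inverse of the interaction matrix L. The sketch below shows this textbook law for a single normalized point feature at known depth (the paper's redundant two-camera architecture stacks and combines such laws); the toy closed loop and all numbers are illustrative.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalized image point
    (x, y) at depth Z: relates feature velocity to the camera twist."""
    return np.array([
        [-1/Z, 0, x/Z, x*y, -(1 + x*x), y],
        [0, -1/Z, y/Z, 1 + y*y, -x*y, -x],
    ])

def ibvs_step(s, s_star, Z, lam=0.5):
    """One IBVS iteration: camera twist v = -lam * L^+ * (s - s*)."""
    L = interaction_matrix(s[0], s[1], Z)
    v = -lam * np.linalg.pinv(L) @ (s - s_star)
    return v, L

# Toy closed loop: propagate the feature with s_dot = L v.
s = np.array([0.3, -0.2])        # current image point
s_star = np.array([0.0, 0.0])    # desired image point
Z, dt = 1.0, 0.1
for _ in range(100):
    v, L = ibvs_step(s, s_star, Z)
    s = s + dt * (L @ v)         # image error decays exponentially
```

One point does not constrain the full 6-DOF pose (the pseudo-inverse returns the minimum-norm twist); practical systems stack three or more features, which is where the singularity and feature-loss issues the paper targets arise.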

    Simulation of Visual Servoing in Grasping Objects Moving by Newtonian Dynamics

    Robot control systems and other manufacturing equipment are traditionally closed systems. This circumstance has hampered the system integration of manipulators, sensors, and other equipment, and such integration has often been made at an unsuitably high hierarchical level. With the aid of vision, visual feedback is used to guide the robot manipulator to the target. This hand-to-target task is fairly easy if the target is static in Cartesian space. However, if the target is in motion, a model of its dynamic behaviour is required for the robot to track and intercept it. The purpose of this project is to show, through simulation in a virtual environment, how to organise robot control systems with sensor integration. The simulation involves catching a thrown virtual ball using a six degree-of-freedom virtual robot and two virtual digital cameras. Tasks executed in this project include placement of the virtual cameras, segmentation and tracking of the moving virtual ball, and model-based prediction of the ball's trajectory. Consideration has to be given to the placement of the cameras so that the whole trajectory of the ball can be captured by both cameras simultaneously. In order to track the trajectory of the virtual ball, the image of the ball captured by the cameras has to be segmented from its background. A model is then developed to predict the trajectory of the ball so that the virtual robot can be controlled to align itself to grasp it.
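The model-based prediction step mentioned above typically amounts to fitting a ballistic (Newtonian) model to triangulated ball positions and extrapolating to the intercept time. A minimal sketch, assuming drag-free projectile motion p(t) = p0 + v0·t - œ·g·tÂČ·e_z and a linear least-squares fit (function names are illustrative, not from the project):

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def fit_ballistic(times, observations):
    """Fit p(t) = p0 + v0*t - 0.5*G*t^2 * e_z to observed 3D positions
    by linear least squares; returns (p0, v0)."""
    t = np.asarray(times, dtype=float)
    obs = np.asarray(observations, dtype=float).copy()
    obs[:, 2] += 0.5 * G * t**2          # remove the known gravity term
    A = np.column_stack([np.ones_like(t), t])
    coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
    return coef[0], coef[1]              # p0 (row 0), v0 (row 1)

def position_at(p0, v0, t):
    """Predicted ball position at time t under the fitted model."""
    p = p0 + v0 * t
    p[2] -= 0.5 * G * t**2
    return p
```

In the simulation, the observations would come from triangulating the segmented ball in the two camera images; the fitted model then gives the interception point the robot must reach.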

    Measurement errors in visual servoing

    In recent years, a number of hybrid visual servoing control algorithms have been proposed and evaluated. For some time now, it has been clear that the classical control approaches, image-based and position-based, have some inherent problems. Hybrid approaches try to combine them in order to overcome these problems. However, most of the proposed approaches concentrate mainly on the design of the control law, neglecting the issue of errors arising from the sensory system. This work deals with the effect of measurement errors in visual servoing. The particular contribution of this paper is the analysis of the propagation of image error through pose estimation and the visual servoing control law. We have chosen to investigate the properties of the vision system and their effect on the performance of the control system. Two approaches are evaluated: i) position-based, and ii) 2 1/2 D visual servoing. We believe that our evaluation offers a valid tool to build and analyze hybrid control systems based on, for example, switching [1] or partitioning [2].
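A first-order version of the error-propagation analysis the paper performs can be sketched as follows: if the commanded twist is the linear map v = -λ L⁺ e of the measured feature error, then i.i.d. image noise with covariance Σ_s maps exactly to a twist covariance Σ_v = λÂČ L⁺ Σ_s L⁺ᔀ. This is a generic illustration of the technique, not the paper's specific derivation; feature geometry and parameters below are made up.

```python
import numpy as np

def point_interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1/Z, 0, x/Z, x*y, -(1 + x*x), y],
        [0, -1/Z, y/Z, 1 + y*y, -x*y, -x],
    ])

def twist_covariance(points, depths, sigma, lam=0.5):
    """Propagate i.i.d. feature noise (std `sigma` on each image
    coordinate) through v = -lam * L^+ * e:
        Sigma_v = lam^2 * L^+ Sigma_s L^+^T."""
    L = np.vstack([point_interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    L_pinv = np.linalg.pinv(L)
    Sigma_s = sigma**2 * np.eye(L.shape[0])
    return lam**2 * L_pinv @ Sigma_s @ L_pinv.T
```

The resulting 6×6 covariance shows which twist components (e.g. rotations about the optical axis versus translations in depth) are most sensitive to image noise for a given feature configuration, which is the kind of comparison the paper carries out between position-based and 2 1/2 D servoing.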

    A Hybrid Visual Control Scheme to Assist the Visually Impaired with Guided Reaching Tasks

    In recent years, numerous researchers have been working towards adapting technology developed for robotic control to the creation of high-technology assistive devices for the visually impaired. These types of devices have been proven to help visually impaired people live with a greater degree of confidence and independence. However, most prior work has focused primarily on a single problem from mobile robotics, namely navigation in an unknown environment. In this work we address the design and performance of an assistive device to aid the visually impaired with a guided reaching task. The device follows an eye-in-hand, IBLM visual servoing configuration with a single camera and vibrotactile feedback to the user to direct guided tracking during the reaching task. We present a model for the system that employs a hybrid control scheme based on a Discrete Event System (DES) approach. This approach avoids significant problems inherent in the competing classical control or conventional visual servoing models for upper limb movement found in the literature. The proposed hybrid model parameterizes the partitioning of the image state space that produces a variable-size targeting window for compensatory tracking in the reaching task. The partitioning is created by positioning hypersurface boundaries within the state space which, when crossed, trigger events that cause DES-controller state transitions enabling differing control laws. A set of metrics encompassing accuracy (D), precision (Ξ_e), and overall tracking performance (ψ) is also proposed to quantify system performance, so that the effect of parameter variations and alternate controller configurations can be compared. To this end, a prototype called aiReach was constructed and experiments were conducted with participant volunteers, testing the functional use of the system and other supporting aspects of its behaviour.
    Results are presented validating the system design and demonstrating effective use of a two-parameter partitioning scheme that utilizes a targeting window with an additional hysteresis region to filter perturbations due to natural proprioceptive limitations, enabling precise control of upper limb movement. Results from the experiments show that accuracy increased with the use of the dual-parameter hysteresis target window model (0.91 ≀ D ≀ 1, ÎŒ(D) = 0.9644, σ(D) = 0.0172) over the single-parameter fixed window model (0.82 ≀ D ≀ 0.98, ÎŒ(D) = 0.9205, σ(D) = 0.0297), while the precision metric, Ξ_e, remained relatively unchanged. In addition, the overall tracking performance metric produces scores which correctly rank the guided reaching tasks from most difficult to easiest.
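The dual-parameter hysteresis targeting window described above can be sketched as a tiny two-state DES: the controller switches to the fine control law when the tracking error enters an inner radius, and switches back to coarse only when the error leaves a strictly larger outer radius, so that small proprioceptive jitter inside the band does not cause chattering between laws. Class and state names here are illustrative, not the thesis's notation.

```python
class HysteresisWindow:
    """Two-parameter targeting window: enter the fine-control state at
    r_in, leave it only beyond r_out > r_in (the hysteresis band
    filters small perturbations of the error signal)."""

    def __init__(self, r_in, r_out):
        assert r_in < r_out, "hysteresis needs an inner radius below the outer"
        self.r_in = r_in
        self.r_out = r_out
        self.state = "coarse"

    def update(self, error):
        """Event-driven transition on the current tracking error."""
        if self.state == "coarse" and error <= self.r_in:
            self.state = "fine"            # crossed the inner boundary
        elif self.state == "fine" and error > self.r_out:
            self.state = "coarse"          # left the outer boundary
        return self.state
```

A single-parameter fixed window is the degenerate case r_in = r_out, which is exactly the configuration the experiments found less accurate.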
    • 

    corecore