10 research outputs found

    From Optimal Synthesis to Optimal Visual Servoing for Autonomous Vehicles

    This thesis focuses on the characterization of optimal (shortest) paths to a desired position for a robot with unicycle kinematics and an on-board camera with limited Field-Of-View (FOV), which must keep a given feature in sight. In particular, I provide a complete optimal synthesis for the problem, i.e., a language of optimal control words and a global partition of the motion plane induced by shortest paths, such that each word in the optimal language is uniquely associated with a region and completely describes the shortest path from any starting point in that region to the goal point. Moreover, I provide a generalization to arbitrary FOVs, including the cases in which the direction of motion is not an axis of symmetry of the FOV, or is not even contained in it. Furthermore, based on the available shortest-path synthesis, feedback control laws are defined for any point on the motion plane by exploiting geometric properties of the synthesis itself. By using a slightly generalized stability analysis setting, namely stability on a manifold, a proof of stability is given for the controlled system. Finally, simulation results are reported to demonstrate the effectiveness of the proposed technique.
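    The unicycle-with-FOV model underlying such a synthesis is standard; the minimal Python sketch below (the symbol names, the Euler integration, and the symmetric-cone visibility test are illustrative assumptions, not the thesis' own notation) shows the kinematics and the feature-in-sight constraint that the shortest paths must respect.

```python
import numpy as np

def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler step of unicycle kinematics: forward speed v, turn rate omega."""
    return (x + v * np.cos(theta) * dt,
            y + v * np.sin(theta) * dt,
            theta + omega * dt)

def feature_in_fov(x, y, theta, feature_xy, half_fov):
    """True if the bearing to the feature lies inside a symmetric FOV cone."""
    bearing = np.arctan2(feature_xy[1] - y, feature_xy[0] - x) - theta
    bearing = np.arctan2(np.sin(bearing), np.cos(bearing))  # wrap to [-pi, pi]
    return abs(bearing) <= half_fov
```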

    Enhanced Image-Based Visual Servoing Dealing with Uncertainties

    Nowadays, the applications of robots in industrial automation have increased considerably. There is a growing demand for dexterous and intelligent robots that can work in unstructured environments. Visual servoing has been developed to meet this need by integrating vision sensors into robotic systems. Although visual servoing has developed significantly, some challenges remain in making it fully functional in industrial environments. The nonlinear nature of visual servoing and system uncertainties are among the problems affecting its control performance. The projection of the 3D scene onto the 2D image plane, which occurs in the camera, creates one source of uncertainty in the system. Another source of uncertainty lies in the parameters of the camera and the robot manipulator. Moreover, the limited field of view (FOV) of the camera is another issue influencing the control performance. There are two main types of visual servoing: position-based and image-based. This project aims to develop a series of new image-based visual servoing (IBVS) methods which address the nonlinearity and uncertainty issues and improve the visual servoing performance of industrial robots. The first method is an adaptive switch IBVS controller for industrial robots, in which the adaptive law deals with the uncertainties of the monocular camera in the eye-in-hand configuration. The proposed switch control algorithm decouples the rotational and translational camera motions and decomposes the IBVS control into three separate stages with different gains. This method increases the system response speed and improves the tracking performance of IBVS while dealing with camera uncertainties. The second method is an image feature reconstruction algorithm based on the Kalman filter, proposed to handle situations where the image features leave the camera's FOV. The combination of the switch controller and the feature reconstruction algorithm not only improves the system response speed and tracking performance of IBVS, but also ensures that servoing succeeds in the case of feature loss. Next, in order to deal with external disturbances and uncertainties due to the depth of the features, a third new control method is designed which combines proportional derivative (PD) control with sliding mode control (SMC) on a 6-DOF manipulator. The properly tuned PD controller ensures fast tracking performance, and the SMC deals with the external disturbances and depth uncertainties. In the last stage of the thesis, a fourth new semi-off-line trajectory planning method is developed to perform IBVS tasks on a 6-DOF robotic manipulator system. In this method, the camera's velocity screw is parametrized using time-based profiles. The parameters of the velocity profile are then determined such that the velocity profile takes the robot to its desired position, by minimizing the error between the initial and desired features. The algorithm for planning the orientation of the robot is decoupled from the position planning, which yields a convex optimization problem and leads to a faster and more efficient algorithm. The merit of the proposed method is that it respects all of the system constraints, including the limitation imposed by the camera's FOV. All the algorithms developed in the thesis are validated via tests on a 6-DOF Denso robot in an eye-in-hand configuration.
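    As background for the controllers described above, the classical IBVS law computes the camera velocity screw from the image-feature error through the pseudoinverse of the interaction matrix. A minimal sketch follows; the point features, known depths, and gain value are illustrative assumptions, and this is the textbook law rather than the thesis' adaptive switch controller.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix (image Jacobian) of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical IBVS: camera velocity screw v = -gain * L^+ * (s - s*)."""
    error = (np.asarray(features) - np.asarray(desired)).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ error
```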

    Visual Servoing in Robotics

    Visual servoing is a well-known approach to guiding robots using visual information. Image processing, robotics, and control theory are combined in order to control the motion of a robot depending on the visual information extracted from the images captured by one or several cameras. On the vision side, a number of issues are currently being addressed by ongoing research, such as the use of different types of image features (or different types of cameras, such as RGBD cameras), image processing at high velocity, and convergence properties. As shown in this book, the use of new control schemes allows the system to behave more robustly, efficiently, or compliantly, with fewer delays. Related issues such as optimal and robust approaches, direct control, path tracking, and sensor fusion are also addressed. Additionally, visual servoing systems can currently be found in a number of different application domains. This book considers various aspects of visual servoing systems, such as the design of new strategies for their application to parallel robots, mobile manipulators, and teleoperation, and the application of this type of control system in new areas.

    Vision-based automatic landing of a rotary UAV

    A hybrid-like (continuous and discrete-event) approach to controlling a small multi-rotor unmanned aerial system (UAS) while landing on a moving platform is described. The landing scheme is based on positioning visual markers on a landing platform in a detectable pattern. After the onboard camera detects the pattern, the inner control algorithm sends visual-based servo commands to align the multi-rotor with the targets. This method is computationally less complex, as it uses color-based object detection applied to a geometric pattern instead of feature tracking algorithms, and has the advantage of not requiring the distance to the objects to be calculated. The continuous approach accounts for the UAV and the platform rolling, pitching, and yawing, which is essential for a real-time landing on a moving target such as a ship. A discrete-event supervisor working in parallel with the inner controller is designed to assist the automatic landing of a multi-rotor UAV on a moving target. This supervisory control strategy allows the pilot and crew to make time-critical decisions when exceptions occur, such as losing the targets from the field of view. The developed supervisor improves both the low-level vision-based auto-landing system and the high-level human-machine interface. The proposed hybrid-like approach was tested in simulation using a quadcopter model in the Virtual Robotics Experimentation Platform (V-REP) working in parallel with the Robot Operating System (ROS). Finally, the method was validated in a series of real-time experiments with indoor and outdoor quadcopters landing on both static and moving platforms. The developed prototype system has demonstrated the capability of landing within 25 cm of the desired point of touchdown. The auto-landing system is small (100 x 100 mm), lightweight (100 g), and consumes little power (under 2 W).
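    The color-based detection step described above can be sketched with OpenCV; the HSV bounds, the minimum blob area, and the use of blob centroids as alignment targets are illustrative assumptions rather than the paper's exact pipeline.

```python
import cv2
import numpy as np

def detect_markers(frame_bgr, lower_hsv=(0, 120, 80), upper_hsv=(10, 255, 255), min_area=50.0):
    """Return the centroids of color blobs that match the landing-pattern markers."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > min_area:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

def alignment_error(centroids, frame_shape):
    """Pixel offset of the pattern centroid from the image center; drives the servo commands."""
    cx, cy = np.mean(centroids, axis=0)
    h, w = frame_shape[:2]
    return cx - w / 2.0, cy - h / 2.0
```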

    Isn't Appearance Enough? - Nonlinear Observability and Observers for Appearance Localization, Mapping, Motion Reconstruction and Servoing Problems and their application to Vehicle Navigation

    In this thesis we investigate how monocular image measurements can be used as the single source of information for a vehicle to sense and navigate through its surroundings. First, we investigate which subset of vehicle location, environment mapping, and vehicle motion can be retrieved from images alone. In particular, the results apply to the case where no model of the vehicle, nor odometry or acceleration measurements, are available. Then, we investigate the use of the information that can be extracted from images in visual servoing tasks, and we define a servoing approach, named Appearance Servoing, that explicitly imposes the existing control constraints in the navigation of an Appearance Map. Finally, we present an experimental case study of the use of appearance, where a sequence of images is used to construct a simple topological map of an office environment and then navigate a robot within it.
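    Appearance-based localization of the kind used in the case study can be sketched as nearest-neighbour matching of global image descriptors against the nodes of a topological map; the downsampled-grayscale descriptor below is an illustrative stand-in for whatever appearance representation the thesis actually adopts.

```python
import cv2
import numpy as np

def appearance_descriptor(image_bgr, size=(32, 24)):
    """Global appearance descriptor: a normalized, downsampled grayscale image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    d = cv2.resize(gray, size).astype(np.float32).ravel()
    return (d - d.mean()) / (d.std() + 1e-8)

def localize(current_bgr, node_descriptors):
    """Index of the topological-map node whose stored appearance best matches the current view."""
    d = appearance_descriptor(current_bgr)
    distances = [np.linalg.norm(d - nd) for nd in node_descriptors]
    return int(np.argmin(distances))
```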

    Mobile Robots Navigation

    Mobile robot navigation includes different interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot to select the next direction to go; (iii) mapping, involving the construction of a spatial representation by using the sensory information perceived; (iv) localization, as the strategy to estimate the robot position within the spatial map; (v) path planning, as the strategy to find a path towards a goal location, optimal or not (a sketch of one such planner follows); and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
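    As a concrete instance of activity (v), a grid-based A* planner is a common baseline; the sketch below (4-connected occupancy grid, unit step costs, Manhattan heuristic) is an illustrative assumption, not a method taken from any particular chapter.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 2D occupancy grid (0 = free, 1 = occupied), 4-connected, Manhattan heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]
    came_from, g = {}, {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:  # reconstruct path back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0):
                new_g = g[cur] + 1
                if new_g < g.get(nxt, float("inf")):
                    g[nxt], came_from[nxt] = new_g, cur
                    heapq.heappush(open_set, (new_g + h(nxt), nxt))
    return None  # no path found
```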

    Towards Robust Visual-Controlled Flight of Single and Multiple UAVs in GPS-Denied Indoor Environments

    Having had its origins in the minds of science fiction authors, mobile robot hardware became reality many years ago. However, most envisioned applications have so far remained fictional, a fact that is likely caused by the lack of sufficient perception systems. In particular, mobile robots need to be aware of their own location with respect to their environment at all times to act in a reasonable manner. A promising near-future application for mobile robots could be, e.g., search and rescue tasks on disaster sites. Here, small and agile flying robots are an ideal tool to effectively create an overview of the scene, since they are largely unaffected by unstructured environments and blocked passageways. In this respect, this thesis first explores the problem of ego-motion estimation for quadrotor Unmanned Aerial Vehicles (UAVs) based entirely on onboard sensing and processing hardware. To this end, cameras are an ideal choice as the major sensory modality: they are light, cheap, and provide a dense amount of information on the environment. While the literature provides camera-based algorithms to estimate and track the pose of UAVs over time, these solutions lack the robustness required for many real-world applications due to their inability to recover quickly from a loss of tracking. Therefore, in the first part of this thesis, a robust algorithm to estimate the velocity of a quadrotor UAV based on optical flow is presented. Additionally, the influence of the incorporated measurements from an Inertial Measurement Unit (IMU) on the precision of the velocity estimates is discussed and experimentally validated. Finally, we introduce a novel nonlinear observation scheme to recover the metric scale factor of the state estimate through fusion with acceleration measurements. This nonlinear model makes it possible to predict the convergence behavior of the presented filtering approach. All findings are experimentally evaluated, including the first presented human-controlled closed-loop flights based entirely on onboard velocity estimation. In the second part of this thesis, we address the problem of collaborative multi-robot operations based on onboard visual perception. When a direct line of sight between the robots exists, we propose a distributed formation control based on ego-motion detection and visually detected bearing angles between the members of the formation. To overcome the limited field of view of real cameras, we add an artificial yaw rotation to track robots that would be invisible to static cameras. Afterwards, without the need for direct visual detections, we present a novel contribution to the mutual localization problem. In particular, we demonstrate precise global localization of a monocular camera with respect to a dense 3D map. To this end, we propose an iterative algorithm that estimates the camera location for which the photometric error between a synthesized view of the dense map and the real camera image is minimal.
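    The optical-flow-based velocity estimation in the first part can be sketched with OpenCV's pyramidal Lucas-Kanade tracker; the downward-facing camera, the pinhole scaling by height, and the median-flow robustification are illustrative assumptions rather than the thesis' exact algorithm.

```python
import cv2
import numpy as np

def flow_velocity(prev_gray, cur_gray, height_m, focal_px, dt):
    """Metric planar velocity of a downward-looking camera from sparse optical flow."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=8)
    if pts is None:
        return None
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return None
    flow_px = np.median((nxt - pts)[good], axis=0).ravel()  # robust median flow, pixels/frame
    # Pinhole model: pixel motion maps to ground motion scaled by height over focal length.
    return -flow_px * height_m / (focal_px * dt)
```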

    Visual Servoing

    The goal of this book is to introduce current vision applications by excellent researchers around the world and to offer knowledge that can also be applied widely to other fields. The book collects the main current studies on machine vision worldwide and makes a persuasive case for the applications in which machine vision is employed. The contents demonstrate how machine vision theory is realized in different fields. For beginners, the developments in visual servoing are made easy to understand; engineers, professors, and researchers can study the chapters and then apply the methods in other applications.

    Proceedings of the International Micro Air Vehicles Conference and Flight Competition 2017 (IMAV 2017)

    The IMAV 2017 conference was held at ISAE-SUPAERO, Toulouse, France, from September 18 to 21, 2017. More than 250 participants from 30 different countries presented their latest research activities in the field of drones. 38 papers were presented during the conference, covering various topics such as aerodynamics, aeroacoustics, propulsion, autopilots, sensors, communication systems, mission planning techniques, artificial intelligence, and human-machine cooperation as applied to drones.