
    A distributed optimization framework for localization and formation control: applications to vision-based measurements

    Full text link
    Multiagent systems have been a major area of research for the last 15 years. This interest has been motivated by tasks that can be executed more rapidly in a collaborative manner or that are nearly impossible to carry out otherwise. To be effective, the agents need a notion of a common goal shared by the entire network (for instance, a desired formation) and individual control laws to realize that goal. The common goal is typically centralized, in the sense that it involves the state of all the agents at the same time. On the other hand, it is often desirable to have individual control laws that are distributed, in the sense that the desired action of an agent depends only on the measurements and states available at the node and at a small number of neighbors. This is an attractive quality because it implies an overall system that is modular and intrinsically more robust to communication delays and node failures.
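The distributed control laws described above can be illustrated with a minimal displacement-based formation sketch: each agent updates its position using only the relative positions of its neighbors, yet the network converges to the desired (centrally specified) formation up to a common translation. The gain, graph, and target formation below are illustrative, not taken from the paper.

```python
import numpy as np

def formation_step(x, desired, neighbors, gain=0.2):
    """One distributed update: agent i moves using only relative
    measurements (x[j] - x[i]) of its neighbors, driving them toward
    the desired relative offsets (desired[j] - desired[i])."""
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        for j in nbrs:
            x_new[i] += gain * ((x[j] - x[i]) - (desired[j] - desired[i]))
    return x_new

# Three agents starting on a line; the goal is a triangle formation
# (achievable only up to a common translation, since measurements
# are purely relative).
x = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
desired = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
neighbors = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(200):
    x = formation_step(x, desired, neighbors)
# The inter-agent offsets now match the desired formation.
```

Because the update for agent i touches only its neighbors, the scheme is modular: adding or removing a node changes only the local neighbor lists, which is exactly the robustness argument made in the abstract.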

    Fast, Accurate Thin-Structure Obstacle Detection for Autonomous Mobile Robots

    Full text link
    Safety is paramount for mobile robotic platforms such as self-driving cars and unmanned aerial vehicles. This work is devoted to a task that is indispensable for safety yet was largely overlooked in the past -- detecting obstacles with very thin structures, such as wires, cables and tree branches. This is a challenging problem, as thin objects can be problematic for active sensors such as lidar and sonar, and even for stereo cameras. In this work, we propose to use video sequences for thin obstacle detection. We represent obstacles with edges in the video frames, and reconstruct them in 3D using efficient edge-based visual odometry techniques. We provide both a monocular camera solution and a stereo camera solution. The former incorporates Inertial Measurement Unit (IMU) data to resolve scale ambiguity, while the latter enjoys a novel, purely vision-based solution. Experiments demonstrated that the proposed methods are fast and able to detect thin obstacles robustly and accurately under various conditions. Comment: Appeared at IEEE CVPR 2017 Workshop on Embedded Vision.
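The abstract notes that monocular visual odometry recovers translation only up to an unknown scale, which IMU data can resolve. A common way to do this (the paper's exact formulation may differ) is a least-squares fit of the scale factor between up-to-scale VO displacements and metric displacements integrated from the IMU:

```python
import numpy as np

def metric_scale(vo_positions, imu_positions):
    """Estimate the unknown monocular-VO scale s by least squares:
    minimize || a - s * v ||^2, where v are per-step VO displacement
    magnitudes (up to scale) and a are metric displacement magnitudes
    integrated from IMU data."""
    v = np.linalg.norm(np.diff(np.asarray(vo_positions, float), axis=0), axis=1)
    a = np.linalg.norm(np.diff(np.asarray(imu_positions, float), axis=0), axis=1)
    return float(a @ v / (v @ v))

# Synthetic check: if the metric path is exactly 2x the VO path,
# the recovered scale is 2.
vo = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], float)
imu = 2.0 * vo
s = metric_scale(vo, imu)
```

In practice the IMU trajectory is itself noisy, so the fit would be run over a sliding window or fused in a filter rather than computed once.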

    Perspective distortion modeling for image measurements

    Get PDF
    A perspective distortion model for monocular views, based on the fundamentals of perspective projection, is presented in this work. Perspective projection is widely considered the most realistic model of image formation in monocular vision. Many approaches attempt to model and estimate the perspective effects in images. Some learn the distortion parameters from training data and therefore work only for a predefined structure. None of the existing methods provides a deep understanding of the nature of perspective problems. Perspective distortions can, in fact, be described by three different perspective effects: pose, distance and foreshortening. They are the cause of the aberrant appearance of object shapes in images. Understanding these phenomena has long been an interesting topic for artists, designers and scientists. In many cases, this problem must be taken into consideration when dealing with image diagnostics, high-accuracy image measurement, and accurate pose estimation from images. In this work, a perspective distortion model for each effect is developed while the nature of the perspective effects is elaborated. A distortion factor for each effect is derived, followed by proposed methods that allow extracting the true target pose and distance, and correcting image measurements.
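The distance effect mentioned above falls directly out of the pinhole (perspective) projection equations x = fX/Z, y = fY/Z: the same physical length spans fewer pixels the farther it is from the camera. A minimal sketch (the focal length and geometry are illustrative):

```python
import numpy as np

def project(points_3d, f=800.0):
    """Pinhole perspective projection: (X, Y, Z) -> (f*X/Z, f*Y/Z).
    Image coordinates are in pixels for a focal length f in pixels."""
    pts = np.asarray(points_3d, dtype=float)
    return f * pts[:, :2] / pts[:, 2:3]

# Distance effect: a 1 m segment imaged at 2 m vs. at 4 m.
# Doubling the distance halves the projected length.
seg_near = project([[0, 0, 2], [1, 0, 2]])
seg_far = project([[0, 0, 4], [1, 0, 4]])
len_near = np.linalg.norm(seg_near[1] - seg_near[0])  # 400 px
len_far = np.linalg.norm(seg_far[1] - seg_far[0])     # 200 px
```

The pose and foreshortening effects follow the same equations: tilting the segment changes its depth range along its length, compressing its image non-uniformly, which is why a per-effect distortion factor is needed to invert the measurement.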

    Simulation of Visual Servoing in Grasping Objects Moving by Newtonian Dynamics

    Get PDF
    Robot control systems and other manufacturing equipment are traditionally closed systems. This circumstance has hampered system integration of manipulators, sensors and other equipment, and such system integration has often been made at an unsuitably high hierarchical level. With the aid of vision, visual feedback is used to guide the robot manipulator to the target. This hand-to-target task is fairly easy if the target is static in Cartesian space. However, if the target is moving, a model of its dynamic behaviour is required for the robot to track and intercept it. The purpose of this project is to show, through simulation in a virtual environment, how to organise robot control systems with sensor integration. The simulation involves catching a thrown virtual ball using a six degree-of-freedom virtual robot and two virtual digital cameras. Tasks executed in this project include placement of the virtual digital cameras, segmentation and tracking of the moving virtual ball, and model-based prediction of the virtual ball's trajectory. Consideration has to be given to the placement of the virtual digital cameras so that the whole trajectory of the ball can be captured by both cameras simultaneously. In order to track the trajectory of the virtual ball, the image of the ball captured by the digital cameras has to be segmented from its background. A model is then developed to predict the trajectory of the virtual ball so that the virtual robot can be controlled to align itself to grasp the moving ball.
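The model-based trajectory prediction step above is, for a thrown ball under Newtonian dynamics, a least-squares fit of a constant-acceleration model p(t) = p0 + v0*t + 0.5*a*t^2 to the tracked positions, which can then be extrapolated to an intercept time. A minimal sketch with synthetic, noise-free observations (the specific throw parameters are illustrative):

```python
import numpy as np

def fit_ballistic(times, positions):
    """Fit p(t) = p0 + v0*t + 0.5*a*t^2 per axis by least squares.
    Returns a (3, dims) array whose rows are p0, v0, a."""
    t = np.asarray(times, dtype=float)
    A = np.stack([np.ones_like(t), t, 0.5 * t**2], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(positions, float), rcond=None)
    return coeffs

def predict(coeffs, t):
    """Extrapolate the fitted model to time t."""
    return coeffs[0] + coeffs[1] * t + 0.5 * coeffs[2] * t**2

# Synthetic throw in 2D: v0 = (3, 4) m/s, gravity -9.81 m/s^2 on y.
t_obs = np.linspace(0.0, 0.3, 10)
p_obs = np.stack([3.0 * t_obs, 4.0 * t_obs - 0.5 * 9.81 * t_obs**2], axis=1)
c = fit_ballistic(t_obs, p_obs)
p_half = predict(c, 0.5)  # predicted position at t = 0.5 s
```

With real (stereo-triangulated) observations the fit would be refreshed as new frames arrive, so the predicted intercept point is refined while the robot moves toward it.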

    Planning and Control of Mobile Robots in Image Space from Overhead Cameras

    Get PDF
    In this work, we present a framework for the development of a planar mobile robot controller based on image plane feedback. We show that the design of such a motion controller can be accomplished in the image plane using a subset of the parameters that relate the image plane to the ground plane, while still leveraging the simplifications offered by modeling the system as a differentially flat system. Our method relies on a waypoint-based trajectory generator, with all the waypoints specified in the image as seen by an overhead observer. We present results from simulation as well as from experiments that validate the ideas presented in this work, and discuss some ideas for future work.
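The relationship between the overhead image plane and the ground plane referred to above is a planar homography: a 3x3 matrix H maps homogeneous image points to ground-plane points (the paper's contribution is controlling with only a subset of its parameters; the sketch below uses a full, purely illustrative H):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography H using homogeneous
    coordinates: [x', y', w'] = H @ [x, y, 1], then divide by w'."""
    pts = np.asarray(pts, dtype=float)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Illustrative overhead-camera-to-ground mapping: a pure scaling,
# e.g. 1 image pixel corresponds to 2 cm on the ground.
H = np.diag([2.0, 2.0, 1.0])
ground_waypoints = apply_homography(H, np.array([[1.0, 2.0], [3.0, 1.0]]))
```

Specifying waypoints directly in the image, as the abstract describes, avoids ever computing `ground_waypoints` explicitly: the controller consumes image-plane errors, which is why only a subset of H's parameters is needed.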

    Vision-based control of multi-agent systems

    Get PDF
    Scope and Methodology of Study: Creating systems with multiple autonomous vehicles places severe demands on the design of decision-making supervisors, cooperative control schemes, and communication strategies. In recent years, several approaches have been developed in the literature. Most of them solve the vehicle coordination problem assuming some kind of communication between team members. However, communication makes the group sensitive to failure and restricts the applicability of the controllers to teams of friendly robots. This dissertation deals with the problem of designing decentralized controllers that use only local sensor information to achieve group goals.
    Findings and Conclusions: This dissertation presents a decentralized architecture for vision-based stabilization of unmanned vehicles moving in formation. The architecture consists of two main components: (i) a vision system, and (ii) vision-based control algorithms. The vision system is capable of recognizing and localizing robots. It is a model-based scheme composed of three main components: image acquisition and processing, robot identification, and pose estimation. Using vision information, we address the problem of stabilizing groups of mobile robots in leader-follower or two-leader-follower formations. The strategies use the relative pose between a robot and its designated leader or leaders to achieve formation objectives. Several leader-follower formation control algorithms, which ensure asymptotic coordinated motion, are described and compared. Analysis based on Lyapunov stability theory, together with numerical simulations in a realistic three-dimensional environment, shows the stability properties of the control approaches.
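The leader-follower strategies above regulate the follower's relative pose with respect to its leader. A minimal sketch of a range-bearing (l-psi) proportional law, in the spirit of such schemes (the gain, setpoints, and output interpretation are illustrative, not the dissertation's controllers):

```python
import numpy as np

def follower_control(rel_pos, l_d, psi_d, k=1.0):
    """Given the leader's position relative to the follower (from the
    vision system), return proportional corrections that drive the
    range l and bearing psi to their desired values l_d and psi_d."""
    l = np.hypot(rel_pos[0], rel_pos[1])
    psi = np.arctan2(rel_pos[1], rel_pos[0])
    # positive range error -> close the gap; bearing error -> turn
    return k * (l - l_d), k * (psi - psi_d)

# Leader measured 2 m directly ahead; desired: 1 m ahead at zero bearing.
v_cmd, w_cmd = follower_control(np.array([2.0, 0.0]), l_d=1.0, psi_d=0.0)
```

Because the only input is the vision-derived relative pose, no communication link to the leader is required, which is the communication-free property the dissertation argues for; a Lyapunov analysis of the full closed loop is what establishes asymptotic convergence.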

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    Get PDF
    No abstract available