
    Low-cost embedded system for relative localization in robotic swarms

    In this paper, we present a small, lightweight, low-cost, fast and reliable system designed to satisfy the requirements of relative localization within a swarm of micro aerial vehicles. The core of the proposed solution is based on off-the-shelf components, a Caspa camera module and a Gumstix Overo board, accompanied by an efficient image processing method developed for detecting black-and-white circular patterns. Although the idea of roundel recognition is simple, the developed system provides reliable and fast estimation of the relative position of the pattern at up to 30 fps using the full resolution of the Caspa camera. The system is thus suited to meet the requirements of vision-based stabilization of a robotic swarm. The intent of this paper is to present the developed system as an enabling technology for various robotic tasks.
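
    As a rough illustration of the kind of pipeline the abstract describes, the sketch below finds circular blob candidates with OpenCV. It is a minimal stand-in under assumed thresholds, not the authors' optimized Gumstix implementation.

```python
import cv2
import numpy as np

def detect_roundels(gray):
    """Find candidate circular patterns in a grayscale frame.

    Simplified illustration: binarize, extract contours, and keep
    blobs whose area is close to that of their enclosing circle.
    The paper's detector is a custom, much faster algorithm.
    """
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < 50:  # skip noise blobs (assumed threshold)
            continue
        (x, y), r = cv2.minEnclosingCircle(c)
        if area / (np.pi * r * r + 1e-9) > 0.8:  # roughly circular
            candidates.append((x, y, r))
    return candidates
```

    Given the known physical radius R of the roundel and the camera focal length f in pixels, the relative distance then follows from the pinhole model as Z ≈ f·R / r for a detected image radius r.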

    Vision application of human robot interaction: Development of a ping pong playing robotic arm

    Robotics is a science that develops in parallel with the study of human behavior. This work describes and implements techniques to mathematically model the game of ping pong as played by humans, and applies these methods to the design and development of a ping-pong-playing robotic arm as an application of robot vision. Displaced frame difference (DFD) is used to segment the ball's motion from the background motion, and parametric calibration of a single CCD camera is used to track the ball in three dimensions. This visual information is updated over time and applied to guide a robot arm to hit the ball at a specified location and time. The results validate a system built on single-camera tracking and demonstrate that its operation is independent of the color of the ball. System latency is measured as a function of the camera interface, processor architecture, and robot motion. Various hardware and software parameters that influence real-time system performance are also discussed.
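
    The displaced-frame-difference step can be sketched as follows. Here dense optical flow stands in for the background-motion estimate, which is an assumption for illustration rather than the paper's stated method.

```python
import cv2
import numpy as np

def dfd_mask(prev, curr, flow, thresh=25.0):
    """Segment independently moving pixels via displaced frame difference.

    flow must satisfy curr(p) ~ prev(p + flow(p)) for the *background*
    motion, e.g. (an assumed choice):
        flow = cv2.calcOpticalFlowFarneback(curr, prev, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
    Pixels whose motion-compensated difference remains large (the ball)
    do not follow the background motion.
    """
    h, w = prev.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Warp the previous frame onto the current one using the flow field.
    compensated = cv2.remap(prev.astype(np.float32),
                            xs + flow[..., 0], ys + flow[..., 1],
                            cv2.INTER_LINEAR)
    dfd = np.abs(curr.astype(np.float32) - compensated)
    return (dfd > thresh).astype(np.uint8) * 255  # binary motion mask
```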

    Machine-Vision-Based Pose Estimation System Using Sensor Fusion for Autonomous Satellite Grappling

    When capturing a non-cooperative satellite during an on-orbit satellite servicing mission, the position and orientation (pose) of the satellite with respect to the servicing vessel is required in order to guide the vessel's robotic arm towards the satellite. The main objective of this research is the development of a machine-vision-based pose estimation system for capturing a non-cooperative satellite. The proposed system finds the satellite pose using three types of natural geometric features: circles, lines and points, and it merges data from two monocular cameras and three different algorithms (one for each type of geometric feature) to increase the robustness of the pose estimation. It is assumed that the satellite has an interface ring (used to attach the satellite to its launch vehicle) and that the cameras are mounted on the robot end effector, which carries the capture tool used to grapple the satellite. The three algorithms are based on a feature extraction and detection scheme that identifies, in the camera images, the geometric features belonging to the satellite, whose geometry is assumed to be known. Since the projection of a circle onto the image plane is an ellipse, an ellipse detection system is used to find the 3D coordinates of the center of the interface ring and its normal vector from the corresponding detected ellipse in the image plane. The sensor and data fusion is performed in two steps. In the first step, a pose solver finds the pose using the conjugate gradient method to optimize a cost function that penalizes the re-projection error of the detected features, which reduces the pose estimation error. In the second step, an extended Kalman filter merges data from the pose solver and the ellipse detection system and produces the final estimated pose. The inputs of the pose estimation system are the camera images, and the outputs are the position and orientation of the satellite with respect to the end effector where the cameras are mounted. Virtual and real experiments using a full-scale realistic satellite mock-up and a 7-DOF robotic manipulator were performed to evaluate the system's performance, under two different lighting conditions and in three scenarios, each with a different set of features. Tracking of the satellite was performed successfully. The total translation error is between 25 mm and 50 mm, and the total rotation error is between 2 deg and 3 deg when the target is at 0.7 m from the end effector.
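
    The pose-solver step, minimizing re-projection error with conjugate gradient, can be sketched as below. The intrinsics, feature points and noise-free toy data are assumptions for illustration; the EKF fusion stage is omitted.

```python
import cv2
import numpy as np
from scipy.optimize import minimize

def reprojection_cost(pose, K, pts3d, pts2d):
    """Sum of squared reprojection errors for a 6-DOF pose.

    pose: [rx, ry, rz, tx, ty, tz] (Rodrigues rotation + translation).
    """
    proj, _ = cv2.projectPoints(pts3d, pose[:3], pose[3:], K, None)
    return np.sum((proj.reshape(-1, 2) - pts2d) ** 2)

# Toy example: four known points on the interface ring, their
# noiseless projections, and a perturbed guess refined by CG.
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
pts3d = np.array([[.4, 0, 0], [-.4, 0, 0], [0, .4, 0], [0, -.4, 0]])
true_pose = np.array([0.1, -0.05, 0.0, 0.02, 0.01, 0.7])
pts2d, _ = cv2.projectPoints(pts3d, true_pose[:3], true_pose[3:], K, None)
pts2d = pts2d.reshape(-1, 2)

res = minimize(reprojection_cost, true_pose + 0.05,
               args=(K, pts3d, pts2d), method='CG')
```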

    Vision-based autonomous aircraft payload delivery system

    This research sought to design and develop an autonomous aircraft payload delivery system which utilised an onboard computer vision system for drop-zone identification. The research was tasked with achieving a modular system which could deliver a given payload within a 5 m radius of a designated drop-zone identifier. An integrated system was developed in which an autonomous flight controller, an onboard companion computer and a computer vision system formed the physical hardware used to achieve the desired objectives. A Linux-based Robot Operating System software architecture was used to develop the control algorithms governing the system's autonomous flight control, its object recognition and tracking through image processing, and its payload release trajectory modelling. The hardware and software architectures were integrated into a remote-control fixed-wing aircraft for testing. Implementation of the system through simulation and physical testing proved successful: payload delivery was achieved from an altitude of 75 m, within an average displacement of 1.82 m from the true drop-zone location, where drop-zone detection and location were determined through an autonomous survey over the drop-zone's approximate location. This research furthered the development of autonomous aircraft delivery systems by introducing computer vision as a means of drop-zone location confirmation and authentication, allowing for greater payload delivery security and efficiency. The results gathered in this research illustrate the possible applications of modular airborne payload delivery systems in Industry 4.0 through the use of such a system in the service delivery sector.
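
    The abstract does not detail the release-trajectory model, but a minimal drag-free ballistic sketch conveys the idea; a real system would add wind and drag corrections.

```python
import math

def release_distance(altitude_m, ground_speed_ms, g=9.81):
    """Horizontal distance before the drop zone at which to release
    a free-falling payload, ignoring air drag (a simplifying
    assumption, not the thesis's full model).
    """
    t_fall = math.sqrt(2.0 * altitude_m / g)  # time to fall to ground
    return ground_speed_ms * t_fall           # horizontal travel while falling

# Example: from 75 m altitude at 20 m/s ground speed, release
# roughly 78 m before the drop zone.
print(release_distance(75.0, 20.0))  # ~78.2 m
```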

    SpiderFab: Process for On-Orbit Construction of Kilometer-Scale Apertures

    The SpiderFab effort has investigated the value proposition and feasibility of radically changing the way we build and deploy spacecraft by enabling space systems to fabricate and integrate key components on-orbit. In this Phase II effort, we have focused on developing and demonstrating tools and processes to enable robotic systems to manufacture and assemble high-performance structural elements that will serve as the support structures for components such as antennas and solar arrays. Through testing of these technologies in the laboratory environment, these efforts have established the technical feasibility of the key capabilities required for in-space manufacture of large apertures such as antennas, solar arrays, and optical systems, maturing prototype technical solutions for these capabilities to TRL-4. The SpiderFab effort has resulted in a successful post-NIAC transition of the technology, first to SBIR-funded development of a technology for in-space manufacture (ISM) of truss structures, and then to a NASA/STMD Tipping Point Technologies funded effort to prepare a flight demonstration of ISM of a structure for a GEO communications satellite.

    Distributed framework for a multi-purpose household robotic arm

    Final degree project carried out in collaboration with the Institut de Robòtica i Informàtica Industrial (IRI).

    The concept of household robotic servants has been in our minds for ages, and domestic appliances are far more robotised than they used to be. At present, manufacturers are starting to introduce small, household human-interactive robots to the market. Any human-interactive device has safety, durability and simplicity constraints, which are especially strict when it comes to robots. Indeed, we are still far from a multi-purpose intelligent household robot, but human-interactive robotics and artificial intelligence research has evolved considerably, and demonstration prototypes are proof of what can be done. This project contributes to research in human-interactive robots, as the robotic arm and hand used are specially designed for human-interactive applications. The present study provides a distributed framework for an arm and a hand device based on the YARP robotics protocol, using the WAM™ arm and the BarrettHand™, as well as a basic modular client application complemented with vision.

    Firstly, two device drivers and a network interface are designed and implemented to control the WAM™ arm and the BarrettHand™ over the network. The drivers allow abstract access to each device, providing three ports: a command requests port, a state requests port and an asynchronous replies port. Secondly, each driver is encapsulated by YARP devices publishing real-time monitoring feedback and motion control to the network through what is called a network wrapper. In particular, the network wrapper for the WAM™ arm and BarrettHand™ provides a state port, a command port, a Remote Procedure Call (RPC) port and an asynchronous notifications port. The state port provides the WAM™ position and orientation feedback at 50 Hz, which represents a maximum blindness of one centimetre. This first part of the project sets the foundations of a distributed, complete robot, whose design enables processing and power payload to be shared by different workstations. Moreover, users are able to work with the robot remotely over Ethernet and wireless through a clear, understandable local interface within YARP.

    In addition to the distributed robotic framework, a client software framework with vision is also supplied. The client framework establishes a general software shell for further development and is organized into the basic, separate robotic branches: control, vision and planning. The vision module supports distributed image grabbing for mobile robotics, and shared memory for fixed, local vision. In order to incorporate environment interaction and robot autonomy into the planner, hand-eye transformation matrices have been obtained to perform object grasping and manipulation. The image processing is based on the OpenCV libraries and provides object recognition with Scale Invariant Feature Transform (SIFT) feature matching, the Hough transform and polygon approximation algorithms (a SIFT-matching sketch follows this abstract). Grasping and path planning use pre-defined grasps which take into account the size, shape and orientation of the target objects. The proof-of-concept applications feature a household robotic arm with the ability to tidy randomly distributed common kitchen objects to specified locations, with real-time robot monitoring and basic control. The device modularity introduced in this project, a philosophy of decoupling the communication, the local device access and the components, was successful. Thanks to the abstract access and decoupling, the demonstration applications provided were easily deployed to test the arm's performance and its remote control and monitoring. Moreover, both resulting frameworks are arm-independent, and the design is currently being adopted by other projects' devices within the IRI.
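
    A minimal sketch of SIFT-based object recognition of the kind the vision module describes, using OpenCV; the matcher choice and ratio-test threshold are assumptions, not the project's settings.

```python
import cv2

def recognize_object(template_gray, scene_gray, min_matches=10):
    """Match SIFT keypoints between an object template and the scene.

    Returns the filtered matches if enough survive the ratio test,
    else an empty list (illustrative thresholds).
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(template_gray, None)
    kp2, des2 = sift.detectAndCompute(scene_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test filters ambiguous correspondences.
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    return good if len(good) >= min_matches else []
```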

    A robotic platform for precision agriculture and applications

    Agricultural techniques have been improved over the centuries to match the growing demand of an increasing global population. Farming applications face new challenges to satisfy global needs, and recent technological advancements in robotic platforms can be exploited. As orchard management is one of the most challenging applications, because of the tree structure and the required interaction with the environment, it was targeted by the University of Bologna research group to provide a customized solution addressing a new concept for agricultural vehicles. The result of this research has blossomed into a new lightweight tracked vehicle capable of performing autonomous navigation both in the open-field scenario and while travelling inside orchards, for what has been called in-row navigation. The mechanical design concept, together with the customized software implementation, is detailed to highlight the strengths of the platform, along with some further improvements envisioned to increase the overall performance. Static stability testing has proved that the vehicle can withstand steep-slope scenarios. Improvements have also been investigated to refine the estimation of the slippage that occurs during turning maneuvers and that is typical of skid-steering tracked vehicles (see the sketch after this abstract). The software architecture has been implemented using the Robot Operating System (ROS) framework, so as to exploit community-available packages for common, basic functions, such as sensor interfaces, while allowing a dedicated custom implementation of the navigation algorithms developed. Real-world testing inside the university's experimental orchards has proven the robustness and stability of the solution, with more than 800 hours of fieldwork. The vehicle has also enabled a wide range of autonomous tasks such as spraying, mowing, and on-the-field data collection. The latter can be exploited to automatically estimate relevant orchard properties such as fruit counting and sizing, canopy property estimation, and autonomous fruit harvesting with post-harvest estimations.
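
    A minimal kinematic sketch of the slippage effect mentioned above, using a single empirical slip coefficient; this is an illustrative model, not the thesis's calibrated estimator.

```python
def skid_steer_twist(v_left, v_right, track_width, slip=0.0):
    """Approximate body twist of a skid-steering tracked vehicle.

    v_left, v_right: track speeds [m/s].
    track_width: lateral distance between the tracks [m].
    slip: empirical coefficient in [0, 1) discounting the commanded
          yaw rate during turns (assumed model, for illustration).
    """
    v = 0.5 * (v_right + v_left)                             # forward speed
    omega = (1.0 - slip) * (v_right - v_left) / track_width  # yaw rate
    return v, omega
```

    In practice the slip coefficient would be identified from field data, e.g. by comparing commanded yaw rates against GNSS or IMU measurements during turning maneuvers.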

    On Robotic Work-Space Sensing and Control

    Industrial robots are fast and accurate when working with known objects at precise locations in well-structured manufacturing environments, as in the classical automation setting. In one sense, the limited use of sensors leaves robots blind and numb, unaware of what is happening in their surroundings. While equipping a system with sensors has the potential to add new functionality and increase the range of uncertainties a robot can handle, it is not as simple as that. Often it is difficult to interpret the measurements and use them to draw the necessary conclusions about the state of the workspace. For effective sensor-based control, it is necessary both to understand the sensor data and to know how to act on it, giving the robot perception-action capabilities. This thesis presents research on how sensors and estimation techniques can be used in robot control. The suggested methods are theoretically analyzed and evaluated with a strong focus on experimental verification in real-time settings. One application class treated is the ability to react quickly and accurately to events detected by vision, which is demonstrated by the realization of a ball-catching robot. A new approach is proposed for performing high-speed color-based image analysis that is robust to varying illumination conditions and motion blur. Furthermore, a method for object tracking is presented, along with a novel way of Kalman-filter initialization that can handle initial-state estimates with infinite variance. A second application class treated is robotic assembly using force control. A study of two assembly scenarios is presented, investigating the possibility of using force-controlled assembly in industrial robotics. Two new approaches for estimating robotic contact forces without any force sensor are presented and validated in assembly operations. The treated topics represent some of the challenges in sensor-based robot control, and it is demonstrated how they can be used to extend the functionality of industrial robots.
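
    One standard way to handle an initial state of infinite variance is the information (inverse-covariance) form of the Kalman filter, where a zero information matrix encodes total ignorance. The sketch below shows that general idea, not the thesis's specific formulation.

```python
import numpy as np

def information_update(Y, y, H, R, z):
    """Measurement update in information (inverse-covariance) form.

    Y = P^-1 (information matrix), y = P^-1 @ x_hat (information
    vector). Starting from Y = 0 encodes infinite initial variance,
    which the covariance-form filter cannot represent numerically.
    """
    Ri = np.linalg.inv(R)
    return Y + H.T @ Ri @ H, y + H.T @ Ri @ z

# No prior knowledge at all about a [position, velocity] state:
Y, y = np.zeros((2, 2)), np.zeros(2)
H = np.array([[1.0, 0.0]])   # a camera measures position only
R = np.array([[1e-2]])
Y, y = information_update(Y, y, H, R, np.array([3.2]))
# Once enough measurements make Y invertible,
# x_hat = np.linalg.solve(Y, y) recovers the usual estimate.
```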

    Autonomous Quadrotor Navigation by Detecting Vanishing Points in Indoor Environments

    Toward the ambitious long-term goal of a fleet of cooperating Flexible Autonomous Machines operating in an uncertain Environment (FAME), this thesis addresses various perception and control problems in autonomous aerial robotics. The objective of this thesis is to motivate the use of perspective cues in single images for the planning and control of quadrotors in indoor environments. In addition to providing empirical evidence for the abundance of such cues in indoor environments, the usefulness of these perspective cues is demonstrated by designing a control algorithm for navigating a quadrotor in indoor corridors. An Extended Kalman Filter (EKF), implemented on top of the vision algorithm, serves to improve the robustness of the algorithm to changing illumination. In this thesis, vanishing points are the perspective cues used to control and navigate a quadrotor in an indoor corridor. Indoor corridors are an abundant source of parallel lines. As a consequence of perspective projection, parallel lines in the real world that are not parallel to the plane of the camera intersect at a point in the image. This point is called the vanishing point of the image. The vanishing point is sensitive to the lateral motion of the camera, and hence of the quadrotor. By tracking the position of the vanishing point in every image frame, the quadrotor can navigate along the center of the corridor. Experiments are conducted using the Augmented Reality (AR) Drone 2.0. The drone is equipped with the following components: (1) a 720p forward-facing camera for vanishing point detection, (2) a 240p downward-facing camera, (3) an Inertial Measurement Unit (IMU) for attitude control, (4) an ultrasonic sensor for estimating altitude, and (5) an on-board 1 GHz processor for processing low-level commands. The reliability of the vision algorithm is demonstrated by flying the drone in indoor corridors.
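
    A simplified stand-in for the vanishing-point detector: intersect Hough line segments in a least-squares sense. The edge and line thresholds are illustrative assumptions, not the thesis's tuned values.

```python
import cv2
import numpy as np

def vanishing_point(gray):
    """Estimate the corridor vanishing point as the least-squares
    intersection of detected Hough line segments."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=10)
    if lines is None:
        return None
    A, b = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        dx, dy = x2 - x1, y2 - y1
        # Skip near-vertical and near-horizontal segments, which
        # carry no corridor-depth cue.
        if abs(dx) < 1e-6 or abs(dy / dx) < 0.1 or abs(dy / dx) > 10:
            continue
        # Segment in implicit form: dy*x - dx*y = dy*x1 - dx*y1.
        A.append([dy, -dx])
        b.append(dy * x1 - dx * y1)
    if len(A) < 2:
        return None
    vp, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float),
                             rcond=None)
    return vp  # (x, y) in image coordinates
```

    A lateral controller can then act on the horizontal offset of the vanishing point from the image centre; one simple (assumed) choice is a proportional command u = -k (vp_x - w/2) to keep the quadrotor centred in the corridor.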

    Behavior Design of Nao Humanoid Robot: a Case of Picking up the Ball and Throwing into the Box

    This thesis presents the behavior design of the Nao humanoid robot through a case study: picking up a ball and throwing it into a box. It covers red-ball recognition, strategy design for tracking the ball, picking up the ball and finding the box, as well as the design of the accompanying animations. The Hough circle transform is used to detect the center of the ball precisely in the robot's field of view. In addition, probability theory is applied to design and optimize the strategy for picking up the ball autonomously. The strategies for tracking the ball dynamically and for calculating the distance to the box are designed from mathematical models, and key frames on a timeline are used to design the animations, allowing the robot to complete the task successfully. The project was implemented in Python on the Windows platform, with Eclipse as the programming environment. OpenCV and Python were used to recognize the red ball, Choregraphe was used to design the animations, and Matlab was used to find the location of the box. The robot is the newest Nao robot, version 5. The implementation followed a software engineering method involving planning, developing, debugging and testing. Making a robot autonomous, so that it behaves like a real human being, is a popular research field; in this project, the robot achieves the ability to pick up the ball and throw it into the box autonomously. It can be concluded that mathematical theory provides significant support for achieving robot autonomy.
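
    A minimal sketch of red-ball detection with a colour mask followed by the Hough circle transform, as described above; the HSV ranges and Hough parameters are illustrative assumptions, not the thesis's values.

```python
import cv2
import numpy as np

def find_red_ball(bgr):
    """Return the ball's image centre and radius, or None."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    mask = cv2.GaussianBlur(mask, (9, 9), 2)
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=50, param1=100, param2=20,
                               minRadius=5, maxRadius=100)
    if circles is None:
        return None
    x, y, r = circles[0, 0]
    return (x, y), r
```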