1,253 research outputs found

    A Factor Graph Approach to Multi-Camera Extrinsic Calibration on Legged Robots

    Legged robots are becoming popular not only in research but also in industry, where they can demonstrate their superiority over wheeled machines in a variety of applications. Whether acting as mobile manipulators or simply as all-terrain ground vehicles, these machines need to precisely track desired base and end-effector trajectories, perform Simultaneous Localization and Mapping (SLAM), and move through challenging environments, all while keeping their balance. A crucial requirement for these tasks is that all onboard sensors be properly calibrated and synchronized, so that they provide consistent signals to all the software modules they feed. In this paper, we focus on the problem of calibrating the relative pose between a set of cameras and the base link of a quadruped robot. This pose is fundamental to successfully performing sensor fusion, state estimation, mapping, and any other task requiring visual feedback. To solve this problem, we propose an approach based on factor graphs that jointly optimizes the relative poses of the cameras and the robot base using kinematics and fiducial markers. We also quantitatively compare its performance with other state-of-the-art methods on the hydraulic quadruped robot HyQ. The proposed approach is simple, modular, and independent of external devices other than the fiducial marker. Comment: To appear in the Third IEEE International Conference on Robotic Computing (IEEE IRC 2019).
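The paper's factor-graph formulation jointly optimizes all the camera poses together with the kinematics; for a single camera, the core sub-problem (aligning marker corners seen in the camera frame with the same corners expressed in the base frame via kinematics) has a classical closed-form answer, the Kabsch/SVD rigid-transform fit. The sketch below shows that simpler baseline, not the paper's optimizer; all variable names are illustrative.

```python
import numpy as np

def fit_rigid_transform(p_cam, p_base):
    """Estimate R, t with p_base ≈ R @ p_cam + t via the Kabsch/SVD method.

    p_cam, p_base: (N, 3) arrays of the same fiducial-marker corners,
    expressed in the camera frame and (via kinematics) in the base frame.
    """
    c_cam, c_base = p_cam.mean(axis=0), p_base.mean(axis=0)
    H = (p_cam - c_cam).T @ (p_base - c_base)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_base - R @ c_cam
    return R, t
```

A factor-graph solver generalizes this by fusing many such constraints, with per-measurement noise models, in one nonlinear least-squares problem.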

    A laboratory breadboard system for dual-arm teleoperation

    The computing architecture of a novel dual-arm teleoperation system is described. The novelty of this system is that: (1) the master arm is not a replica of the slave arm; it is not specific to any particular manipulator and can be used to control various robot arms with software modifications; and (2) the force feedback to the general-purpose master arm is derived from force-torque sensor data originating at the slave hand. The computing architecture of this breadboard system is a fully synchronized pipeline with unique methods for data handling, communication, and mathematical transformations. The computing system is modular and thus inherently extendable. The local control loops at both sites operate at a 100 Hz rate, and the end-to-end bilateral (force-reflecting) control loop operates at a 200 Hz rate, each loop without interpolation, providing high-fidelity control. This end-to-end system elevates teleoperation to a new level of capability through the use of sensors, microprocessors, novel electronics, and real-time graphics displays. A description is given of a graphic simulation system connected to the dual-arm teleoperation breadboard system. High-fidelity graphic simulation of a telerobot (called the Phantom Robot) is used for preview and predictive displays, for planning, and for real-time control under communication time delays of several seconds. High-fidelity graphic simulation is obtained by using appropriate calibration techniques.
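A force-reflecting bilateral loop of this kind can be caricatured as a fixed-rate update in which the slave tracks the master's pose while the slave's force-torque reading, scaled, is fed back to the master. The gains, the single-axis state, and the first-order slave model below are illustrative assumptions, not the breadboard's actual control law.

```python
def bilateral_step(x_master, x_slave, f_sensed, kp=50.0, force_scale=0.5, dt=1.0 / 200.0):
    """One 200 Hz tick of a toy force-reflecting teleoperation loop.

    x_master, x_slave: positions along one axis (scalars, for brevity).
    f_sensed: force-torque sensor reading at the slave hand.
    Returns the new slave position and the force commanded to the master arm.
    """
    v_slave = kp * (x_master - x_slave)   # slave servos toward the master pose
    x_slave_new = x_slave + v_slave * dt
    f_master = force_scale * f_sensed     # scaled force reflection to the master
    return x_slave_new, f_master
```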

    Synthesis and Validation of Vision Based Spacecraft Navigation


    Development of a sensor coordinated kinematic model for neural network controller training

    A robotic benchmark problem useful for evaluating alternative neural network controllers is presented. Specifically, it derives two camera models and the kinematic equations of a multiple-degree-of-freedom manipulator whose end effector is under observation. The mappings developed include the forward and inverse translations from binocular images to 3-D target position, and the inverse kinematics mapping point positions into manipulator commands in joint space. Implementation is detailed for a three-degree-of-freedom manipulator with one revolute joint at the base and two prismatic joints on the arms. The example is restricted to operation within a unit cube, with arm links of 0.6 and 0.4 units, respectively. The development is presented in the context of more complex simulations, and a logical path for extending the benchmark to higher-degree-of-freedom manipulators is outlined.
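For a manipulator of this class (one revolute base joint, two prismatic joints), a cylindrical arrangement admits closed-form forward and inverse kinematics. The axis assignment below is an illustrative assumption, not necessarily the benchmark's exact geometry:

```python
import numpy as np

def forward(theta, d_r, d_z):
    """Cylindrical FK: base rotation theta, radial extension d_r, vertical extension d_z."""
    return np.array([d_r * np.cos(theta), d_r * np.sin(theta), d_z])

def inverse(p):
    """Closed-form IK for the same arrangement (assumes d_r >= 0)."""
    x, y, z = p
    return np.arctan2(y, x), np.hypot(x, y), z
```

A forward-inverse round trip of this kind is exactly the mapping a neural network controller would be trained to approximate in the benchmark.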

    Robot Manipulators

    Robot manipulators have developed more in the direction of industrial robots than of human-like workers. Recently, the applications of robot manipulators have been broadening in scope, with examples such as Da Vinci as a medical robot and ASIMO as a humanoid robot. There are many research topics within the field of robot manipulators, e.g., motion planning, cooperation with humans, and fusion with external sensing such as vision, haptics, and force. These span both technical problems in industry and theoretical problems in academia. This book is a collection of papers presenting the latest research issues from around the world.

    Dynamic Active Constraints for Surgical Robots using Vector Field Inequalities

    Robotic assistance allows surgeons to perform dexterous and tremor-free procedures, but robotic aid is still underrepresented in procedures with constrained workspaces, such as deep-brain neurosurgery and endonasal surgery. In these procedures, surgeons have restricted vision of the areas near the surgical tooltips, which increases the risk of unexpected collisions between the shafts of the instruments and their surroundings. In this work, our vector-field-inequalities method is extended to provide dynamic active constraints for any number of robots and moving objects sharing the same workspace. The method is evaluated in experiments and simulations in which robot tools must avoid collisions autonomously and in real time in a constrained endonasal surgical environment. Simulations show that with our method the combined trajectory error of two robotic systems is optimal. Experiments using a real robotic system show that the method can autonomously prevent collisions between the moving robots themselves and between the robots and the environment. Moreover, the framework is also successfully verified under teleoperation with tool-tissue interactions. Comment: Accepted for publication in IEEE Transactions on Robotics (T-RO), 2019, 19 pages.
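The core of the vector-field-inequality idea is a velocity-level constraint, roughly grad(d) · v ≥ -eta · d, which keeps the signed distance d to an obstacle from shrinking faster than an exponential bound. In the paper this enters a quadratic program over all robots; with a single constraint it reduces to a minimum-norm projection of the desired velocity, sketched below (eta and the toy geometry are illustrative assumptions):

```python
import numpy as np

def vfi_filter(v_desired, d, grad_d, eta=2.0):
    """Minimally correct v_desired so that grad_d . v >= -eta * d.

    d: current distance to the obstacle (>= 0); grad_d: gradient of d
    w.r.t. the tool position, so grad_d . v is the distance rate.
    """
    rate = grad_d @ v_desired
    bound = -eta * d
    if rate >= bound:                 # constraint inactive: pass through
        return v_desired
    # active constraint: minimum-norm correction along grad_d
    return v_desired + (bound - rate) / (grad_d @ grad_d) * grad_d
```

Motions away from (or slowly toward) the obstacle pass through unchanged; only approach velocities that violate the bound are clipped.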

    Autonomous vehicle guidance in unknown environments

    Benefiting from significant, technology-driven advances in performance, autonomous vehicles are rapidly expanding the range of fields in which they can be effectively applied. From operations in hostile, dangerous environments (military use in removing unexploded ordnance, surveys of nuclear power and chemical plants following accidents) to repetitive 24-hour tasks (border surveillance), from force multipliers in production to more mundane commercial applications in household activities (cleaning robots as consumer electronics products), the combination of autonomy and motion nowadays offers impressive options. In fact, an autonomous vehicle can be equipped with a number of sensors, actuators, and devices that make it able to carry out a wide variety of tasks. However, to attain these results, the vehicle must be able to navigate in different, sometimes unknown, environments. This is the goal of this dissertation: to analyze and, mainly, to propose a suitable solution for the guidance of autonomous vehicles. The frame in which this research takes its steps is the activity carried out at the Guidance and Navigation Lab of Sapienza – Università di Roma, hosted at the School of Aerospace Engineering. Indeed, the proposed solution has an intrinsic, though not limiting, bias towards possible space applications, which will become obvious in some of the following content. A second bias dictated by the Guidance and Navigation Lab activities is the choice of a sample platform. It would be difficult to perform a meaningful study at a very general level, independent of the characteristics of the targeted kind of vehicle: it is easy to see from the rough list of applications above that these characteristics are extremely varied.
The Lab hosted, even before the beginning of this thesis activity, a simple, home-designed and home-built model of a small yet sufficiently capable autonomous vehicle called RAGNO (standing for Rover for Autonomous Guidance Navigation and Observation): it was an obvious choice to select that rover as the reference platform for identifying guidance solutions, and to use it, while contributing to its improvement, for the test activities that should be considered mandatory in this kind of thesis work to validate the suggested approaches. The thesis comprises four main chapters, plus the introduction, final remarks and future perspectives, and the list of references. The first chapter (“Autonomous Guidance Exploiting Stereoscopic Vision”) investigates in detail the technique deemed most interesting for small vehicles. The current availability of low-cost, high-performance cameras suggests the adoption of stereoscopic vision as a quite effective technique, also capable of giving a remote crew a view of the scenario quite similar to the one humans would have. Several advanced image analysis techniques have been investigated for extracting features from the left- and right-eye images, with the SURF and BRISK algorithms selected as the most promising. In short, SURF is a blob detector with an associated descriptor of 64 elements, where a feature is extracted by applying sequential box filters to the surrounding area. Features are localized at the points of the image where the determinant of the Hessian matrix H(x,y) is maximum. The descriptor vector is then determined by computing the Haar wavelet response over a sampling pattern centered on the feature. BRISK is instead a corner detector with an associated binary descriptor of 512 bits.
A BRISK feature is identified as the brightest point in a circular sampling area of N pixels, while the descriptor vector is calculated by computing the brightness gradient of each of the N(N-1)/2 pairs of sampling points. Once the left and right features have been extracted, their descriptors are compared to determine the corresponding pairs. The matching criterion consists of seeking the two descriptors whose relative distance (Euclidean norm for SURF, Hamming distance for BRISK) is minimum. The matching process is computationally expensive: to reduce the required time, the thesis successfully exploited epipolar geometry, based on the geometric constraint between the left and right projections of a scene point P, which limits the space to be searched. Overall, the selected techniques require between 200 and 300 ms on a 2.4 GHz CPU for feature extraction and matching in a single (left+right) capture, making them feasible for slow-moving vehicles. Once the matching phase is complete, a disparity map can be prepared highlighting the positions of the identified objects, and by means of triangulation (the baseline between the two cameras is known, and the size of the targeted object is measured in pixels in both images) the position and distance of the obstacles can be obtained. The second chapter (“A Vehicle Prototype and its Guidance System”) is devoted to the implementation of stereoscopic vision onboard a small test vehicle, the previously cited RAGNO rover.
A description of the vehicle is given first: the chassis, the propulsion system with four electric motors driving the wheels, the good performance attainable, and the commanding options (fully autonomous, partly autonomous with remote monitoring, or fully remotely controlled via TCP/IP over mobile networks), with a focus on the different sensors that, depending on the scenario, can complement the stereoscopic vision system. The intelligence side of the guidance subsystem, exploiting the navigation information provided by the cameras, is then detailed. Two guidance techniques have been studied and implemented to identify the optimal trajectory in a field with scattered obstacles: artificial potential guidance, based on the Lyapunov approach, and the A-star algorithm, which seeks the minimum of a cost function built on a graph joining the cells of a mesh superimposed on the scenario. The performance of the two techniques is assessed on two specific test cases, and the possibility of unstable behavior of the artificial potential guidance, bouncing among local minima, is highlighted. Overall, A-star guidance is the suggested solution in terms of time, cost, and reliability. Note that, to cope with the noise affecting the sensor information, an estimation process based on Kalman filtering has also been included to improve the smoothness of the target trajectory. The third chapter (“Examples of Possible Missions and Applications”) reports two experimental campaigns adopting RAGNO for the detection of dangerous gases. In the first, the rover carries a dedicated sensor and autonomously moves in open fields, avoiding possible obstacles, to take measurements at given time intervals.
The same RAGNO configuration is also used in the second campaign; this time, however, the path of the rover is autonomously computed from waypoints communicated by a drone flying above the measurement area and identifying possible targets of interest. The fourth chapter (“Guidance of Fleet of Autonomous Vehicles”) extends this successful idea to fleets of vehicles and numerically investigates, with algorithms purposely written in Matlab, the performance of a simple swarm of two rovers exploring an unknown scenario, intended, as an example, to represent a case of planetary surface exploration. Awareness of the surrounding environment is dictated by the characteristics of the sensors carried onboard, which have been assumed on the basis of the experience gained in the previous chapters. Moreover, the communication issues that would affect real-world cases are included in the scheme by modeling the communication link and by running the simulation in a multi-task configuration in which the two rovers are assigned to two different computer processes, each with a different TCP/IP address and a behavior that depends on the flow of information received from the other explorer. Even if only at the simulation level, this final step brings together the different aspects investigated during the PhD period: feasible sensor characteristics (obviously focusing on stereoscopic vision), guidance techniques, coordination among autonomous agents, and possible application cases.
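The triangulation step described in the first chapter reduces, for a rectified camera pair, to the standard disparity-to-depth relation Z = f·B/d followed by back-projection. The focal length, baseline, and pixel coordinates below are illustrative, not the rover's calibration values:

```python
def triangulate(xl, xr, y, f, baseline):
    """Depth and 3-D point from one matched feature pair (rectified cameras).

    xl, xr: horizontal pixel coordinates of the match in the left/right image
    (measured from the principal point); y: shared vertical coordinate;
    f: focal length in pixels; baseline: camera separation in metres.
    """
    disparity = xl - xr               # larger disparity -> closer object
    Z = f * baseline / disparity
    X = xl * Z / f                    # back-project through the left camera
    Y = y * Z / f
    return X, Y, Z
```

Applying this to every matched SURF or BRISK pair yields the obstacle positions that feed the guidance layer.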
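The A-star guidance of the second chapter, searching a cost graph over a mesh superimposed on the scenario, can be sketched on a 4-connected occupancy grid with a Manhattan-distance heuristic (the grid encoding and unit step cost are illustrative assumptions):

```python
import heapq

def astar(grid, start, goal):
    """A* path search on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    h = lambda cell: abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])  # admissible heuristic
    open_set = [(h(start), start)]    # heap of (f = g + h, cell)
    g = {start: 0}
    parent = {start: None}
    closed = set()
    while open_set:
        _, cell = heapq.heappop(open_set)
        if cell in closed:
            continue                  # stale heap entry
        closed.add(cell)
        if cell == goal:              # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nb[0] < len(grid) and 0 <= nb[1] < len(grid[0]) and grid[nb[0]][nb[1]] == 0:
                tentative = g[cell] + 1
                if tentative < g.get(nb, float("inf")):
                    g[nb] = tentative
                    parent[nb] = cell
                    heapq.heappush(open_set, (tentative + h(nb), nb))
    return None                       # goal unreachable
```

Unlike artificial potential guidance, a search of this kind cannot get trapped bouncing among local minima, which matches the thesis's preference for A-star.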

    Autonomous Visual Servo Robotic Capture of Non-cooperative Target

    This doctoral research develops and experimentally validates a vision-based control scheme for the autonomous capture of a non-cooperative target by robotic manipulators, for active space debris removal and on-orbit servicing. It focuses on the final capture stage by the robotic manipulator, after the orbital rendezvous and proximity maneuvers have been completed. Two challenges are identified and investigated in this stage: dynamic estimation of the non-cooperative target and autonomous visual servo robotic control. First, an integrated algorithm combining photogrammetry and an extended Kalman filter is proposed for the dynamic estimation of the non-cooperative target, whose motion is unknown in advance. To improve the stability and precision of the algorithm, the extended Kalman filter is enhanced by dynamically correcting the distribution of the filter's process noise. Second, the concept of incremental kinematic control is proposed to avoid the ambiguity of multiple solutions when solving the inverse kinematics of robotic manipulators. The proposed target motion estimation and visual servo control algorithms are validated experimentally on a custom-built visual servo manipulator-target system. Electronic hardware for the robotic manipulator and computer software for the visual servo are custom-designed and developed. The experimental results demonstrate the effectiveness and advantages of the proposed vision-based robotic control for the autonomous capture of a non-cooperative target. Furthermore, a preliminary study is conducted toward future extension of the robotic control to account for flexible joints.
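The incremental kinematic control idea, commanding small Cartesian increments mapped through the Jacobian pseudo-inverse rather than picking one branch of the closed-form inverse kinematics, can be sketched on a planar two-link arm. The link lengths, gain, and iteration count are illustrative assumptions, not the thesis's actual controller:

```python
import numpy as np

def fk(q, l1=1.0, l2=0.8):
    """End-effector position of a planar two-link arm."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=0.8):
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    c1, c12 = np.cos(q[0]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def incremental_ik(q, target, steps=200, gain=0.5):
    """Drive q toward target by repeated small increments dq = J^+ dx.

    Starting from the current configuration and moving incrementally avoids
    having to choose among the multiple closed-form IK solutions.
    """
    for _ in range(steps):
        dx = target - fk(q)
        q = q + gain * (np.linalg.pinv(jacobian(q)) @ dx)
    return q
```

Because each step starts from the current joint state, the arm converges to the solution branch nearest its present configuration.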

    Modeling and Control of the Cooperative Automated Fiber Placement System

    Automated Fiber Placement (AFP) machines have brought significant improvements to composite manufacturing. However, current AFP machines are designed to manufacture simple structures such as shallow shells or tubes, and cannot handle some applications with more complex shapes. A cooperative AFP system is proposed to manufacture more complex composite components, which place higher demands on trajectory planning than the current AFP systems can meet. The system consists of a 6-degree-of-freedom (DOF) serial robot holding the fiber placement head, a 6-DOF revolute-spherical-spherical (RSS) parallel robot on which a 1-DOF mandrel holder is installed, and an eye-to-hand photogrammetry sensor (C-Track) that detects the poses of the end-effectors of both the parallel and serial robots. Kinematic models of the parallel and serial robots are built, the constraints and singularities of the cooperative AFP system are analyzed, and the definitions of the tool frames for both robots are illustrated. Some kinematic parameters of the parallel robot are calibrated using the photogrammetry sensor. Although the cooperative AFP system increases the flexibility of composite manufacturing by adding more DOFs, in some cases there may be no feasible path for laying up the fiber, because the path must remain free of collisions and singularities. To meet this challenge, an innovative semi-offline synchronized-trajectory algorithm is proposed that incorporates on-line robot control into following paths generated off-line, especially when those paths are infeasible for the current multi-robot setup. By correcting the robots' paths at the points where collisions and singularities occur, the fiber can be laid up continuously without interruption. The corrections are computed from the pose-tracking data of the parallel robot measured on-line by the photogrammetry sensor. Exploiting the flexibility of the 6-DOF parallel robot, optimized offsets with varying movements are generated according to the different singularities and constraints. Experimental results demonstrate the successful avoidance of singularities and joint limits, and show that the designed cooperative AFP system can perform the movements needed to manufacture a Y-shaped composite structure.
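One standard way to keep joint velocities bounded when an on-line correction drives a robot near a singularity is damped least-squares (Levenberg-Marquardt style) inversion of the Jacobian, which trades a little tracking accuracy for numerical robustness. The damping schedule and thresholds below are illustrative assumptions, not this system's actual correction law:

```python
import numpy as np

def dls_velocity(J, dx, eps=0.1, lam_max=0.05):
    """Damped least-squares joint velocity dq for a Cartesian increment dx.

    The damping lambda ramps up as the smallest singular value of J drops
    below eps, keeping dq bounded through singular configurations.
    """
    sigma_min = np.linalg.svd(J, compute_uv=False).min()
    lam = 0.0 if sigma_min >= eps else lam_max * (1.0 - (sigma_min / eps) ** 2)
    JT = J.T
    # Solve (J J^T + lam I) y = dx, then dq = J^T y
    return JT @ np.linalg.solve(J @ JT + lam * np.eye(J.shape[0]), dx)
```

Far from singularities the damping vanishes and the exact inverse-kinematics velocity is recovered; near them, the commanded dq stays finite instead of blowing up.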