7,964 research outputs found

    Knowledge-Based Control for Robot Arm


    An Effective Multi-Cue Positioning System for Agricultural Robotics

    The self-localization capability is a crucial component for Unmanned Ground Vehicles (UGVs) in farming applications. Approaches based solely on visual cues or on low-cost GPS are prone to failure in such scenarios. In this paper, we present a robust and accurate 3D global pose estimation framework designed to take full advantage of heterogeneous sensory data. By modeling the pose estimation problem as a pose graph optimization, our approach simultaneously mitigates the cumulative drift introduced by motion estimation systems (wheel odometry, visual odometry, ...) and the noise introduced by raw GPS readings. Along with a suitable motion model, our system also integrates two additional types of constraints: (i) a Digital Elevation Model and (ii) a Markov Random Field assumption. We demonstrate how using these additional cues substantially reduces the error along the altitude axis and, moreover, how this benefit spreads to the other components of the state. We report exhaustive experiments combining several sensor setups, showing accuracy improvements ranging from 37% to 76% with respect to the exclusive use of a GPS sensor. We show that our approach provides accurate results even if the GPS unexpectedly changes positioning mode. The code of our system, along with the acquired datasets, is released with this paper. Comment: Accepted for publication in IEEE Robotics and Automation Letters, 201
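    The fusion idea described in this abstract can be illustrated with a toy example. The sketch below is not the authors' released code: it fuses drifting 1D odometry increments with sparse, noisy GPS-like absolute fixes by minimizing weighted least-squares residuals over the whole trajectory, which is the essence of a pose-graph formulation. All variable names, noise levels, and the 1D setting are assumptions made for illustration.

```python
# Minimal 1D pose-graph sketch (assumed, not the paper's code): fuse biased,
# noisy odometry increments with sparse, noisy absolute fixes.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Ground-truth trajectory and simulated measurements.
true_x = np.cumsum(np.ones(50))                        # robot advances 1 m per step
odom = np.diff(np.concatenate([[0.0], true_x])) + 0.05 + rng.normal(0, 0.1, 50)
gps_idx = np.arange(0, 50, 10)                         # absolute fix every 10 steps
gps = true_x[gps_idx] + rng.normal(0, 0.5, gps_idx.size)

SIGMA_ODOM, SIGMA_GPS = 0.1, 0.5                       # assumed measurement std devs

def residuals(x):
    """Stack relative (odometry) and absolute (GPS) residuals, each weighted
    by the inverse of its measurement standard deviation."""
    r_odom = (np.diff(np.concatenate([[0.0], x])) - odom) / SIGMA_ODOM
    r_gps = (x[gps_idx] - gps) / SIGMA_GPS
    return np.concatenate([r_odom, r_gps])

# Initialize with dead-reckoned odometry, then optimize the whole trajectory.
x0 = np.cumsum(odom)
sol = least_squares(residuals, x0)

print("dead-reckoning RMSE:", np.sqrt(np.mean((x0 - true_x) ** 2)))
print("fused estimate RMSE:", np.sqrt(np.mean((sol.x - true_x) ** 2)))
```

    The absolute fixes bound the drift accumulated by the relative measurements, which is why, in the paper's 3D setting, adding cues such as the elevation model further constrains the otherwise weakly observed altitude component.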

    Automated sequence and motion planning for robotic spatial extrusion of 3D trusses

    While robotic spatial extrusion has demonstrated a new and efficient means to fabricate 3D truss structures at architectural scale, a major challenge remains in automatically planning the extrusion sequence and robotic motion for trusses with unconstrained topologies. This paper presents the first attempt in the field to rigorously formulate the extrusion sequence and motion planning (SAMP) problem, using a constraint satisfaction problem (CSP) encoding. Furthermore, this research proposes a new hierarchical planning framework to solve extrusion SAMP problems, which usually have a long planning horizon and 3D configuration complexity. By decoupling sequence and motion planning, the planning framework is able to efficiently solve for the extrusion sequence, end-effector poses, joint configurations, and transition trajectories for spatial trusses with nonstandard topologies. This paper also presents the first detailed computational data revealing the runtime bottleneck in solving SAMP problems, which provides insight and a baseline for comparison for future algorithmic development. Together with the algorithmic results, this paper presents an open-source and modularized software implementation called Choreo that is machine-agnostic. To demonstrate the power of this algorithmic framework, three case studies, including real fabrication and simulation results, are presented. Comment: 24 pages, 16 figures
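    To make the sequence half of the SAMP formulation concrete, the following toy sketch (not Choreo; the truss, the single constraint, and the function names are assumptions) searches for an extrusion order in which every element is printed only when one of its end nodes is already connected to printed material or to the ground. This is the kind of partial-order constraint a CSP encoding captures; the actual planner also handles collision and the coupled motion-planning stage.

```python
# Toy extrusion-sequence search (assumed illustration, not Choreo): backtracking
# over orders that satisfy a simple connectivity/support constraint.

# Tiny truss: nodes 0 and 1 are ground anchors; edges are the elements to print.
GROUND = {0, 1}
ELEMENTS = [(0, 2), (1, 2), (0, 3), (2, 3), (2, 4), (3, 4)]

def supported(element, printed_nodes):
    """Connectivity constraint: one end of the element must already be reachable."""
    a, b = element
    return a in printed_nodes or b in printed_nodes

def plan(sequence, remaining, printed_nodes):
    """Depth-first backtracking over extrusion orders satisfying the constraint."""
    if not remaining:
        return sequence
    for elem in list(remaining):
        if supported(elem, printed_nodes):
            result = plan(sequence + [elem],
                          remaining - {elem},
                          printed_nodes | set(elem))
            if result is not None:
                return result
    return None  # dead end: backtrack

order = plan([], set(ELEMENTS), set(GROUND))
print("feasible extrusion sequence:", order)
```

    In the hierarchical framework described above, a sequence found this way would then be handed to the motion-planning layer, which solves end-effector poses, joint configurations, and transition trajectories for each step.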

    Robot graphic simulation testbed

    The objective of this research was twofold. First, the basic capabilities of ROBOSIM (a graphical simulation system) were improved and extended by taking advantage of advanced graphic workstation technology and artificial intelligence programming techniques. Second, the scope of the graphic simulation testbed was extended to include general problems of Space Station automation. Hardware support for 3-D graphics and high processing performance make high-resolution solid modeling, collision detection, and simulation of structural dynamics computationally feasible. The Space Station is a complex system with many interacting subsystems. Design and testing of automation concepts demand modeling of the affected processes, of their interactions, and of the proposed control systems. The automation testbed was designed to facilitate studies in Space Station automation concepts.

    New hybrid control architecture for intelligent mobile robot navigation in a manufacturing environment

    This paper presents a new hybrid control architecture for Intelligent Mobile Robot navigation in a manufacturing environment, based on the implementation of Artificial Neural Networks for behavior generation. The architecture is founded on the use of Artificial Neural Networks for the assemblage of fast-reacting behaviors, obstacle detection, and a module for action selection based on environment classification. In contrast to the standard formulation of robot behaviors, in the proposed architecture there is no explicit modeling of robot behaviors. Instead, the use of empirical data gathered in an experimental process, together with Artificial Neural Networks, should ensure the proper generation of each particular behavior, so that the generated control commands respond robustly to the current state of the environment. In this way, the overall architectural response should be flexible and robust to failures, and consequently provide reliable operation. These issues are especially important considering that this architecture is being developed for a mobile robot operating in a manufacturing environment as a component of an Intelligent Manufacturing System.
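    As a rough illustration of the behavior-generation idea, the sketch below uses hypothetical layer sizes and random placeholder weights (in the paper the networks are trained on empirically gathered data): a small feed-forward network maps range-sensor readings directly to one of a few reactive behaviors instead of relying on hand-coded behavior rules.

```python
# Minimal sketch (assumed architecture and placeholder weights) of ANN-based
# behavior selection: sensor vector in, winning behavior out.
import numpy as np

BEHAVIORS = ["go_forward", "turn_left", "turn_right", "stop"]
rng = np.random.default_rng(42)

# In the paper the weights come from training on empirical data; here they are
# random placeholders just to make the pipeline runnable.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)   # 8 range sensors -> 16 hidden units
W2, b2 = rng.normal(size=(16, 4)), np.zeros(4)    # 16 hidden units -> 4 behaviors

def select_behavior(ranges):
    """Forward pass through the network, then pick the highest-scoring behavior."""
    h = np.tanh(ranges @ W1 + b1)
    scores = h @ W2 + b2
    return BEHAVIORS[int(np.argmax(scores))]

# Example: simulated readings from eight range sensors (metres).
print(select_behavior(np.array([2.0, 1.8, 0.4, 0.3, 0.5, 1.9, 2.0, 2.0])))
```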

    Motion Control of the Hybrid Wheeled-Legged Quadruped Robot Centauro

    Emerging applications will demand robots to deal with complex environments which lack the structure and predictability of the industrial workspace. Complex scenarios will require robot complexity to increase as well, compared to classical topologies such as fixed-base manipulators, wheeled mobile platforms, tracked vehicles, and their combinations. Legged robots, such as humanoids and quadrupeds, promise to provide platforms that are flexible enough to handle real-world scenarios; however, the improved flexibility comes at the cost of considerably higher control complexity. As a trade-off, hybrid wheeled-legged robots have been proposed, mitigating control complexity whenever the ground surface is suitable for driving. Following this idea, a new hybrid robot called Centauro has been developed at the Humanoid and Human Centered Mechatronics lab at Istituto Italiano di Tecnologia (IIT). Centauro is a wheeled-legged quadruped with a humanoid bi-manual upper body. Differently from other platforms of similar concept, Centauro employs customized actuation units, which provide high torque output, moderately fast motions, and the possibility to control the exerted torque. Moreover, with more than forty motors moving its limbs, Centauro is a highly redundant platform, with the potential to execute many different tasks at the same time. This thesis deals with the design and development of a software architecture, and a control system, tailored to such a robot; both wheeled and legged locomotion strategies have been studied, as well as prioritized whole-body and interaction controllers that exploit the robot's torque control capabilities and are capable of handling the system's redundancy. A novel software architecture, made of (i) a real-time robotic middleware and (ii) a framework for online, prioritized Cartesian control, forms the basis of the entire work.
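    The prioritized, whole-body control mentioned in this abstract can be sketched numerically. The example below is a generic two-level null-space scheme on a toy 4-DOF planar arm, not the Centauro software: it solves a primary Cartesian velocity task with the Jacobian pseudo-inverse and resolves a secondary, assumed joint-posture task only in the null space of the primary one, so the redundancy is used without disturbing the main task.

```python
# Two-level prioritized inverse kinematics on a redundant planar arm
# (generic textbook scheme, assumed for illustration).
import numpy as np

L = np.array([0.4, 0.4, 0.3, 0.2])          # link lengths of a 4-DOF planar arm

def jacobian(q):
    """2xN position Jacobian of the planar arm's end effector."""
    cum = np.cumsum(q)
    J = np.zeros((2, q.size))
    for j in range(q.size):
        J[0, j] = -np.sum(L[j:] * np.sin(cum[j:]))
        J[1, j] = np.sum(L[j:] * np.cos(cum[j:]))
    return J

q = np.array([0.3, -0.2, 0.4, 0.1])
dx1 = np.array([0.05, -0.02])               # primary task: desired end-effector velocity
J1 = jacobian(q)

# Secondary task (assumed): drive joint 0 towards zero.
J2 = np.array([[1.0, 0.0, 0.0, 0.0]])
dx2 = np.array([-q[0]])

# Strict priority: task 2 acts only inside the null space of task 1.
J1_pinv = np.linalg.pinv(J1)
dq = J1_pinv @ dx1
N1 = np.eye(q.size) - J1_pinv @ J1
dq += np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ dq)

print("joint velocities:", dq)
print("primary task error:", J1 @ dq - dx1)  # ~0: the priority is respected
```

    With more than forty joints and several simultaneous tasks, a whole-body controller generalizes this idea to longer priority chains, typically cast as a sequence of constrained quadratic programs rather than plain pseudo-inverses.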

    Human-Mechanical system interaction in Virtual Reality

    The present work aims to show the great potential of Virtual Reality (VR) technologies in the field of Human-Robot Interaction (HRI). Indeed, it is foreseeable that in the not-too-distant future cooperating robots will be increasingly present in human environments. Many authors believe that after the current information revolution we will witness the so-called "robotics revolution", with the spread of increasingly intelligent and autonomous robots capable of moving into our own environments. Since these machines must be able to interact with human beings in a safe way, new design tools for the study of Human-Robot Interaction are needed. The author believes that VR is an ideal design tool for the study of the interaction between humans and automatic machines, since it allows designers to interact in real time with virtual robotic systems and to evaluate different control algorithms without the need for physical prototypes. This also shields the user from any risk related to physical experimentation. However, VR technologies also have more immediate applications in the field of HRI, such as studying the usability of interfaces for real-time controlled robots. In fact, such robots, for example robots for microsurgery or "teleoperated" robots working in hostile environments, are already quite common. VR allows designers to evaluate the usability of such interfaces by relating their physical input to a virtual output. In particular, the author has developed a new software application aimed at simulating automatic robots and, more generally, mechanical systems in a virtual environment. The user can interact with one or more virtual manipulators and also control them in real time by means of several input devices. Finally, an innovative approach to the modeling and control of a humanoid robot with a high degree of redundancy is discussed. The VR implementation of a virtual humanoid is useful for the study of both humanoid robots and human beings.
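    The real-time interaction loop such a simulator needs can be outlined as follows. The sketch uses placeholder input and rendering functions, since the author's application is not described at code level: it reads an input device, integrates the virtual manipulator's joints, and updates the scene at a fixed rate.

```python
# Schematic real-time control loop for a virtual manipulator (assumed sketch,
# not the author's application).
import time
import numpy as np

DT = 1.0 / 60.0                      # 60 Hz update, a typical VR rendering rate

def read_input_device():
    """Placeholder for a real device driver (joystick, haptic arm, tracker)."""
    return np.array([0.1, 0.0, -0.05])        # desired joint velocities [rad/s]

def render(q):
    """Placeholder for the graphics/VR back end."""
    print("joint angles:", np.round(q, 3))

q = np.zeros(3)                      # state of a 3-joint virtual manipulator
for _ in range(5):                   # a few iterations instead of an endless loop
    dq = read_input_device()         # user command from the physical interface
    q = q + dq * DT                  # integrate the virtual robot's joints
    render(q)                        # update the virtual scene
    time.sleep(DT)                   # keep the loop close to real time
```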