
    Real-Time Hybrid Visual Servoing of a Redundant Manipulator via Deep Reinforcement Learning

    Fixtureless assembly may be necessary in some manufacturing tasks and environments due to various constraints, but it poses challenges for automation because of non-deterministic characteristics not favoured by traditional approaches to industrial automation. Visual servoing methods of robotic control could be effective for sensitive manipulation tasks where the desired end-effector pose can be ascertained via visual cues. Visual data is complex and computationally expensive to process, but deep reinforcement learning has shown promise for robotic control in vision-based manipulation tasks. However, these methods are rarely used in industry due to the resources and expertise required to develop application-specific systems and prohibitive training costs. Training reinforcement learning models in simulated environments offers a number of benefits for the development of robust robotic control algorithms: it reduces training time and costs, and it provides repeatable benchmarks on which algorithms can be tested, developed and eventually deployed to real robotic control environments. In this work, we present a new simulated reinforcement learning environment for developing accurate robotic manipulation control systems in fixtureless environments. Our environment incorporates a contemporary collaborative industrial robot, the KUKA LBR iiwa, with the goal of positioning its end effector in a generic fixtureless environment based on a visual cue. Observational inputs comprise the robotic joint positions and velocities, as well as two cameras whose positioning reflects hybrid visual servoing: one camera is attached to the robotic end effector and the other observes the workspace. We propose a state-of-the-art deep reinforcement learning approach to solving the task environment and make preliminary assessments of the efficacy of this approach to hybrid visual servoing for the defined problem environment. We also conduct a series of experiments exploring the hyperparameter space of the proposed reinforcement learning method. Although our initial results could not demonstrate the efficacy of a deep reinforcement learning approach to solving the task environment, we remain confident that such an approach could be feasible for this industrial manufacturing challenge, and that the novel software contributed in this work provides a good basis for exploring reinforcement learning approaches to hybrid visual servoing in accurate manufacturing contexts.
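
    The environment just described lends itself to a standard reinforcement-learning interface. The sketch below shows, under loudly labelled assumptions, what such an interface could look like: a Gymnasium-style environment whose observations combine 14 proprioceptive values (7 joint positions and 7 velocities for the LBR iiwa) with the two camera views. The class name, image size, and placeholder dynamics are illustrative, not the authors' implementation.

```python
# A minimal sketch, assuming a Gymnasium-style API; names, shapes, and the
# placeholder dynamics are illustrative assumptions, not the paper's code.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class HybridVisualServoingEnv(gym.Env):
    """A 7-DOF arm must position its end effector at a visually cued pose,
    observed by a wrist-mounted camera and a static workspace camera."""

    def __init__(self, img_size=84):
        self.observation_space = spaces.Dict({
            # 7 joint positions + 7 joint velocities (the LBR iiwa has 7 joints).
            "joints": spaces.Box(-np.inf, np.inf, (14,), np.float32),
            "wrist_cam": spaces.Box(0, 255, (img_size, img_size, 3), np.uint8),  # eye-in-hand view
            "scene_cam": spaces.Box(0, 255, (img_size, img_size, 3), np.uint8),  # workspace view
        })
        self.action_space = spaces.Box(-1.0, 1.0, (7,), np.float32)  # joint-velocity commands
        self._img_size = img_size

    def _obs(self):
        # A real environment would query its physics simulator and render here.
        return {
            "joints": np.zeros(14, np.float32),
            "wrist_cam": np.zeros((self._img_size, self._img_size, 3), np.uint8),
            "scene_cam": np.zeros((self._img_size, self._img_size, 3), np.uint8),
        }

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        return self._obs(), {}

    def step(self, action):
        # Placeholder dynamics; a reward would track negative end-effector pose error.
        reward, terminated, truncated = 0.0, False, False
        return self._obs(), reward, terminated, truncated, {}
```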

    A versatile and reconfigurable microassembly workstation

    In this paper, we present a versatile and reconfigurable microassembly workstation, designed and realized as a research tool for investigating problems in microassembly and micromanipulation processes, together with recent developments in the mechanical and control structure of the system with respect to the previous workstation. These developments include: (i) the addition of a manipulator system to realize more complicated assembly and manipulation tasks; (ii) the addition of extra DOF for the vision system and sample-holder stages in order to make the system more versatile; (iii) a new optical microscope as the vision system, used to visualize the microworld and determine the position and orientation of the micro components to be assembled or manipulated; and (iv) modular control-system hardware which allows handling more DOF. In addition, several experiments using the workstation are presented in different modes of operation, such as tele-operated, semi-automated and fully automated by means of vision-based schemes.
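
    Point (iii) above, determining the position and orientation of micro components from microscope images, can be illustrated in a few lines of OpenCV. The sketch below is a generic single-part approach (Otsu thresholding plus an oriented bounding box); it is an assumption for illustration, not the workstation's actual vision pipeline.

```python
# A minimal sketch of planar pose recovery from a microscope frame, using OpenCV.
# The single bright part and the Otsu threshold are illustrative assumptions.
import cv2
import numpy as np

def locate_component(gray: np.ndarray):
    """Return (cx, cy, angle_deg) of the largest bright blob in a grayscale frame."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None  # nothing visible in the field of view
    part = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(part)  # oriented bounding box
    return cx, cy, angle
```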

    Gluing free assembly of an advanced 3D structure using visual servoing.

    The paper deals with the robotic assembly of five parts by their U-grooves to achieve stable 3D MEMS, without any use of soldering. The parts and their grooves measure 400 Όm × 400 Όm × 100 Όm ± 1.5 Όm and 100 Όm × 100 Όm × 100 Όm ± 1.5 Όm respectively, leading to an assembly clearance ranging from −3 to +3 Όm. Two visual servoing approaches are used simultaneously: 2D visual servoing for the gripping and release of parts, and 3D visual servoing for the displacement of parts. The results of experiments are presented and analyzed.
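
    The 2D (image-based) visual servoing used for gripping and release is classically implemented with the control law v = −λ L⁺(s − s*), where L is the interaction matrix of the tracked image features. A minimal sketch with the standard point-feature interaction matrix follows; the gain and pseudo-inverse formulation are textbook choices, not necessarily those of the paper.

```python
# A minimal sketch of classical image-based visual servoing: v = -lambda * L^+ (s - s*).
# The point-feature interaction matrix is the standard model; values are illustrative.
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x * x), y],
        [0, -1 / Z, y / Z, 1 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity (vx, vy, vz, wx, wy, wz) driving features to their goals."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```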

    Survey of Visual and Force/Tactile Control of Robots for Physical Interaction in Spain

    Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which identify the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these mono-sensor controllers and the multi-sensor controllers which combine several sensors.
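
    To make the mono-/multi-sensor distinction concrete, the sketch below shows a textbook way of combining two of the surveyed strategies, routing each task axis either to a visual servoing command or to a force command through a selection matrix. It is a generic illustration, not any specific system from the survey.

```python
# A minimal sketch of a hybrid vision/force scheme: a diagonal selection matrix
# assigns each of the 6 task axes to one sensor modality. Gains are illustrative.
import numpy as np

def hybrid_command(visual_error, force_error, selection, kv=0.5, kf=0.002):
    """6-DOF velocity command: vision controls axes where selection == 1,
    force control takes over where selection == 0."""
    S = np.diag(selection)                      # 6x6 selection matrix
    v_vision = -kv * np.asarray(visual_error)   # e.g. from a visual servoing law
    v_force = -kf * np.asarray(force_error)     # e.g. measured minus desired wrench
    return S @ v_vision + (np.eye(6) - S) @ v_force
```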

    Cooperative tasks between humans and robots in industrial environments

    Collaborative tasks between human operators and robotic manipulators can improve the performance and flexibility of industrial environments. Nevertheless, the safety of humans should always be guaranteed, and the behaviour of the robots should be modified when there is a risk of collision. This paper presents the research that the authors have performed in recent years to develop a human-robot interaction system which guarantees human safety by precisely tracking the complete body of the human and by activating safety strategies when the distance between human and robot becomes too small. This paper not only summarizes the techniques which have been implemented to develop this system, but also shows its application in three real human-robot interaction tasks. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under Grant Agreement no. 231640 and the project HANDLE. This research has also been supported by the Spanish Ministry of Education and Science through the research project DPI2011-22766.
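
    The distance-triggered safety strategy can be illustrated with a simple speed-and-separation rule: scale the robot's commanded velocity with the minimum distance between the tracked human body and the robot, halting inside a protective zone. The function and thresholds below are illustrative assumptions, not the paper's implementation.

```python
# A minimal sketch of distance-based speed scaling for human-robot coexistence.
# The stop/slow thresholds are illustrative, not the paper's values.
import numpy as np

def safe_speed_scale(human_points, robot_points, stop_dist=0.3, slow_dist=1.0):
    """Return a factor in [0, 1] multiplying the robot's commanded velocity.
    Both inputs are (N, 3) arrays of 3D points in a common frame."""
    diffs = np.asarray(human_points)[:, None, :] - np.asarray(robot_points)[None, :, :]
    d_min = np.min(np.linalg.norm(diffs, axis=-1))  # closest human-robot distance
    if d_min <= stop_dist:
        return 0.0                      # too close: halt the manipulator
    if d_min >= slow_dist:
        return 1.0                      # far away: full speed
    return (d_min - stop_dist) / (slow_dist - stop_dist)  # linear ramp between zones
```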

    Reliable vision-guided grasping

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems; it differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent' and increasingly fast. Speed is achieved primarily by information reduction, which exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image; therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides robust error recovery: when a lower-level routine fails, the next-higher routine is called, and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.
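
    The windowed fast path with slower full-frame recovery can be sketched as follows; the template-matching approach, window size, and acceptance threshold are illustrative assumptions, not the paper's actual feature extractor.

```python
# A minimal sketch of ROI-windowed tracking with hierarchical fallback: search a
# small predicted window first, then the whole frame if that fails. Illustrative.
import cv2
import numpy as np

def track_feature(frame, template, predicted_xy, window=64, threshold=0.7):
    """Return the feature's (x, y) position, trying a fast windowed search first.
    `frame` and `template` must share dtype and channel count."""
    h, w = template.shape[:2]
    x, y = int(predicted_xy[0]), int(predicted_xy[1])
    x0, y0 = max(0, x - window), max(0, y - window)
    roi = frame[y0:y + window + h, x0:x + window + w]  # region around the prediction
    for image, ox, oy in ((roi, x0, y0), (frame, 0, 0)):  # fast path, then fallback
        if image.shape[0] >= h and image.shape[1] >= w:
            scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
            _, score, _, loc = cv2.minMaxLoc(scores)
            if score >= threshold:
                return loc[0] + ox, loc[1] + oy
    return None  # both levels failed: caller escalates to slower recovery
```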
    • 
