
    Tactile Sensing for Robotic Applications

    This chapter provides an overview of tactile sensing in robotics and attempts to answer three basic questions: What is meant by tactile sensing? Why is tactile sensing important? How is tactile sensing achieved? The chapter is organized to answer these questions in sequence. Tactile sensing has often been equated with force sensing, which is not wholly true. To clarify such misconceptions, tactile sensing is defined in section 2. Why tactile sensing is important for robotics, and which parameters tactile sensors need to measure to perform various tasks successfully, are discussed in section 3. An overview of how tactile sensing has been achieved is given in section 4, where a number of technologies and transduction methods that have been used to improve the tactile sensing capability of robotic devices are discussed. The lack of any tactile analog to Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) optical arrays has often been cited as one of the reasons for the slow development of tactile sensing vis-à-vis other sense modalities such as vision. Our own contribution, the development of tactile sensing arrays using piezoelectric polymers and silicon micromachining, is an attempt to achieve a tactile analog of CMOS optical arrays. The first-phase implementation of these tactile sensing arrays is discussed in section 5. Section 6 concludes the chapter with a brief discussion of the present status of tactile sensing and the challenges that remain to be solved.

    Precision bin-picking using a 3D sensor and a 1D laser sensor

    The technique by which a robot grasps objects randomly placed inside a box or on a pallet is called bin-picking. This process is of great interest in an industrial environment, as it provides enhanced automation, increased production and cost reduction. Bin-picking has evolved greatly over the years thanks to advances in vision technology, software, and gripping solutions, all of which are in constant development. However, the creation of a versatile system, capable of collecting any type of object without deforming it, regardless of the disordered environment around it, remains a challenge. To this end, the use of 3D perception is unavoidable. Still, the information acquired by some lower-cost 3D sensors is not very precise; therefore, combining this information with that of other devices is an approach already under study. The main goal of this work is to develop a solution for a precise bin-picking process capable of grasping small and fragile objects without breaking or deforming them. This is done by combining the information provided by two sensors: a 3D sensor (Kinect) used to analyse the workspace and identify the object, and a 1D laser sensor used to determine the exact distance to the object when approaching it. Additionally, the developed system may be placed at the end of a manipulator in order to become an active perception unit. Once the global system of sensors, their controllers and the robotic manipulator are integrated into a ROS (Robot Operating System) infrastructure, the data provided by the sensors can be analysed and combined to provide a bin-picking solution. Finally, the testing phase demonstrated, after some adjustments to the laser sensor measurements, the viability and reliability of the developed bin-picking process.
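    The sensor combination described above can be sketched in a few lines: a minimal fusion rule, assuming hypothetical coordinates and a hypothetical validity range for the laser reading (the actual system integrates the sensors through ROS rather than a single function).

```python
def fuse_depth(kinect_xyz, laser_range, laser_max_range=2.0):
    """Refine a coarse 3D-sensor depth estimate with a precise 1D laser reading.

    kinect_xyz      -- (x, y, z) object position from the 3D sensor, in metres
    laser_range     -- distance along the approach axis from the 1D laser
    laser_max_range -- hypothetical validity limit for the laser reading

    Lateral position (x, y) is kept from the 3D sensor; depth z is replaced
    by the laser measurement whenever the reading is valid.
    """
    x, y, z = kinect_xyz
    if 0.0 < laser_range <= laser_max_range:
        z = laser_range  # trust the laser's finer depth resolution
    return (x, y, z)
```

    For example, `fuse_depth((0.10, 0.20, 0.55), 0.52)` keeps the Kinect's lateral estimate but adopts the laser depth of 0.52 m; an out-of-range reading leaves the Kinect depth untouched.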

    An intelligent multi-floor mobile robot transportation system in life science laboratories

    In this dissertation, a new intelligent multi-floor transportation system based on a mobile robot is presented to connect distributed laboratories in a multi-floor environment. In the system, new indoor mapping and localization methods are presented, a hybrid path planning approach is proposed, and an automated door management system is presented. In addition, a hybrid strategy with innovative floor estimation to handle the elevator operations is implemented. Finally, the presented system controls the working processes of the related sub-systems. Experiments prove the efficiency of the presented system.
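    The multi-floor routing problem addressed above can be illustrated with a standard shortest-path search over a graph whose nodes are (floor, location) pairs and whose elevator edges connect floors. The map, costs, and node names below are hypothetical illustrations, not the dissertation's actual planner.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path over a floor/location graph.

    graph maps node -> [(neighbour, cost)]; nodes are (floor, place) tuples,
    and an 'elevator' edge is simply an edge between lift nodes on two floors.
    Returns (path, total_cost).
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(pq, (nd, nbr))
    # reconstruct the path by walking predecessors back from the goal
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]
```

    With a toy two-floor map (lab and lift on each floor, an elevator edge between the lifts), the planner routes lab-to-lab through the elevator, which is the behaviour the hybrid planner must produce at building scale.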

    Design and Development of Robotic Part Assembly System under Vision Guidance

    Robots are widely used for part assembly across manufacturing industries to attain high productivity through automation. Automated mechanical part assembly contributes a major share of the production process. An appropriate vision-guided robotic assembly system further minimizes lead time and improves the quality of the end product through suitable object detection methods and robot control strategies. This work develops a robotic part assembly system with the aid of an industrial vision system, in three phases. The first phase focuses on feature extraction and object detection techniques. A hybrid edge detection method is developed by combining fuzzy inference rules and wavelet transformation. The performance of this edge detector is quantitatively analysed and compared with widely used edge detectors such as Canny, Sobel, Prewitt, Robert, Laplacian of Gaussian, mathematical-morphology-based, and wavelet-transformation-based methods. A comparative study is performed to choose a suitable corner detection method; the techniques considered are curvature scale space, Wang-Brady, and the Harris method. The successful implementation of a vision-guided robotic system depends on the system configuration, such as eye-in-hand or eye-to-hand. In these configurations, the captured images of the parts may be corrupted by geometric transformations such as scaling, rotation, translation, and blurring due to camera or robot motion. To address this issue, an image reconstruction method is proposed using orthogonal Zernike moment invariants. The suggested method uses a selection process for the moment order to reconstruct the affected image, which makes the object detection method efficient. In the second phase, the proposed system is developed by integrating the vision system and the robot system. The proposed feature extraction and object detection methods are tested and found efficient for the purpose. In the third phase, robot navigation based on visual feedback is proposed. In the control scheme, general moment invariants, Legendre moments, and Zernike moment invariants are used. The best combination of visual features is selected by measuring the Hamming distance between all possible combinations of visual features; this yields the combination that makes image-based visual servoing control efficient. An indirect method is employed to determine the Legendre and Zernike moment invariants, as these moments are robust to noise. The control laws based on these three global image features perform efficiently in navigating the robot in the desired environment.
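    The moment-based features used above follow a common pattern: compute image moments, then form combinations invariant to pose changes. As an illustration only (the work uses general, Legendre, and Zernike moments; the sketch below computes Hu's first invariant, a simpler member of the same family, on a hypothetical intensity grid):

```python
def raw_moment(img, p, q):
    """Raw image moment m_pq = sum over pixels of x^p * y^q * I(x, y)."""
    return sum((x ** p) * (y ** q) * v
               for y, row in enumerate(img)
               for x, v in enumerate(row))

def hu1(img):
    """First Hu invariant eta20 + eta02.

    Invariant to translation (and, in the continuous domain, to scale and
    rotation), which is why such features suit visual servoing.
    """
    m00 = raw_moment(img, 0, 0)
    cx = raw_moment(img, 1, 0) / m00  # centroid x
    cy = raw_moment(img, 0, 1) / m00  # centroid y
    mu20 = sum(((x - cx) ** 2) * v
               for y, row in enumerate(img) for x, v in enumerate(row))
    mu02 = sum(((y - cy) ** 2) * v
               for y, row in enumerate(img) for x, v in enumerate(row))
    # normalise order-2 central moments by m00^2
    return (mu20 + mu02) / (m00 ** 2)
```

    Shifting the same blob inside the image leaves the invariant unchanged, which is the property the control law relies on.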

    Automated freeform assembly of threaded fasteners

    Over the past two decades, a major part of the manufacturing and assembly market has been driven by customer requirements. Increasing customer demand for personalised products creates demand for smaller batch sizes, shorter production times, lower costs, and the flexibility to produce families of products, or different parts, with the same sets of equipment. Consequently, manufacturing companies have deployed various automation systems and production strategies to improve their resource efficiency and move towards right-first-time production. However, many of these automated systems, which involve robot-based, repeatable assembly automation, require component-specific fixtures for accurate positioning and extensive robot programming to achieve flexibility in production. Threaded fastening operations are widely used in assembly. In high-volume production, fastening processes are commonly automated using jigs, fixtures, and semi-automated tools. This form of automation delivers reliable assembly results at the expense of flexibility, and requires component variability to be adequately controlled. On the other hand, in low-volume, high-value manufacturing, fastening processes are typically carried out manually by skilled workers. This research addresses the aforementioned issues by developing a freeform automated threaded-fastener assembly system that uses 3D visual guidance. The proof-of-concept system focuses on picking up fasteners from clutter, identifying a hole feature in an imprecisely positioned target component, and carrying out torque-controlled fastening. This approach achieves flexibility and adaptability without dedicated fixtures or robot programming. The research also investigates and evaluates different 3D imaging technologies to identify the technology suitable for fastener assembly in a non-structured industrial environment. The proposed solution utilises commercially available technologies to enhance the precision and speed of identifying components for assembly processes, thereby improving and validating the possibility of reliably implementing this solution in industrial applications. As part of this research, a number of novel algorithms are developed to robustly identify assembly components located in a random environment, enhancing existing methods and technologies within the domain of fastening processes. A bolt identification algorithm was developed to identify bolts located in random clutter by enhancing an existing surface-based matching algorithm. A novel hole-feature identification algorithm was developed to detect threaded holes and identify their size and location in 3D. The developed bolt and feature identification algorithms are robust and have the sub-millimetre accuracy required to perform successful fastener assembly under industrial conditions. In addition, the processing time required for these identification algorithms to identify and localise bolts and hole features is less than a second, thereby increasing the speed of fastener assembly.
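    Hole localisation from 3D data typically reduces to fitting a circle to rim points. The thesis's actual algorithm is not reproduced here; the sketch below shows a standard algebraic (Kåsa) least-squares circle fit, an assumed building block for estimating a hole's centre and radius from measured edge points.

```python
def fit_circle(points):
    """Kåsa least-squares circle fit; returns (cx, cy, r).

    Uses the linear model x^2 + y^2 = A*x + B*y + C with A = 2*cx,
    B = 2*cy, C = r^2 - cx^2 - cy^2, solved via its 3x3 normal equations.
    """
    n = float(len(points))
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); syy = sum(y * y for _, y in points)
    sxy = sum(x * y for x, y in points)
    sz = sum(x * x + y * y for x, y in points)
    szx = sum((x * x + y * y) * x for x, y in points)
    szy = sum((x * x + y * y) * y for x, y in points)
    M = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    b = [szx, szy, sz]
    # tiny Gaussian elimination (no pivoting; fine for well-spread rim points)
    for i in range(3):
        for j in range(i + 1, 3):
            f = M[j][i] / M[i][i]
            for k in range(3):
                M[j][k] -= f * M[i][k]
            b[j] -= f * b[i]
    sol = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        sol[i] = (b[i] - sum(M[i][k] * sol[k] for k in range(i + 1, 3))) / M[i][i]
    A, B, C = sol
    cx, cy = A / 2.0, B / 2.0
    r = (C + cx * cx + cy * cy) ** 0.5
    return cx, cy, r
```

    Four rim points sampled around a 0.5 mm-radius hole centred at (1, 2) recover the centre and radius exactly; with noisy scanner data the fit averages the error over all rim points, which is what gives sub-millimetre localisation a chance.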

    Locomotion Optimization of Photoresponsive Small-scale Robot: A Deep Reinforcement Learning Approach

    Soft robots are composed of elastic and flexible structures, and actuatable soft materials are often used to provide stimulus responses that can be controlled remotely with different kinds of external stimuli, which is beneficial for designing small-scale devices. Among stimuli-responsive materials, liquid crystal networks (LCNs) have gained significant attention for soft small-scale robots over the past decade, as they can be stimulated and actuated by light, which is clean, able to transfer energy remotely, easily available, and amenable to sophisticated control. One of the persistent challenges in photoresponsive robotics is producing controllable, autonomous locomotion behavior. In this Thesis, different types of photoresponsive soft robots were used to realize light-powered locomotion, and an artificial intelligence-based approach was developed for controlling the movement. A robot tracking system, including an automatic laser steering function, was built for efficient robotic feature detection and for steering the laser beam automatically to desired locations. Another robot prototype, a swimmer robot driven by the automatically steered laser beam, showed directional movements with some degree of uncertainty and randomness in its locomotion behavior. A novel approach was developed to deal with the challenges related to the locomotion of photoresponsive swimmer robots. Machine learning, particularly a deep reinforcement learning method, was applied to develop a control policy for autonomous locomotion behavior. This method learns from experience by interacting with the robot and its environment, without explicit knowledge of the robot's structure, constituent material, or mechanics. Because a large number of experiences is required to evaluate the quality of behavior control, a simulator was developed that mimics the uncertain and random movement behavior of the swimmer robots. This approach effectively adapted to the random movement behaviors and produced an optimal control policy for reaching different destination points autonomously within a simulated environment. This work takes a successful step towards the autonomous locomotion control of soft photoresponsive robots.
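    The control idea can be miniaturised into a toy example: tabular Q-learning (a simpler relative of the deep reinforcement learning actually used) on a 1-D simulated swimmer whose actuation randomly flips, mimicking the uncertain locomotion described above. The states, rewards, and noise level below are all hypothetical.

```python
import random

def train_swimmer(episodes=1000, noise=0.1, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy 1-D 'swimmer' with actuation uncertainty.

    States 0..4 are positions along a line; state 4 is the target.
    Actions: 0 = move left, 1 = move right; with probability `noise`
    the commanded move flips, mimicking the robots' random locomotion.
    Returns the learned Q-table q[state][action].
    """
    rng = random.Random(seed)
    GOAL = 4
    q = [[0.0, 0.0] for _ in range(GOAL + 1)]
    for _ in range(episodes):
        s = 0
        for _ in range(100):
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda i: q[s][i])
            move = 1 if a == 1 else -1
            if rng.random() < noise:
                move = -move  # actuation uncertainty
            s2 = min(max(s + move, 0), GOAL)
            done = s2 == GOAL
            r = 10.0 if done else -1.0
            target = r + (0.0 if done else gamma * max(q[s2]))
            q[s][a] += alpha * (target - q[s][a])
            s = s2
            if done:
                break
    return q
```

    After training, the greedy policy prefers "right" in every non-goal state, so a rollout from the start reaches the target despite the policy having been learned entirely from noisy interactions.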

    The Future of Humanoid Robots

    This book provides state-of-the-art scientific and engineering research findings and developments in the field of humanoid robotics and its applications. It is expected that humanoids will change the way we interact with machines and will have the ability to blend perfectly into an environment already designed for humans. The book contains chapters that aim to discover the future abilities of humanoid robots by presenting a variety of integrated research in various scientific and engineering fields, such as locomotion, perception, adaptive behavior, human-robot interaction, neuroscience and machine learning. The book is designed to be accessible and practical, with an emphasis on useful information for those working in the fields of robotics, cognitive science, artificial intelligence, computational methods and other fields of science directly or indirectly related to the development and usage of future humanoid robots. The editor of the book has extensive R&D experience, patents, and publications in the area of humanoid robotics, and this experience is reflected in the content of the book.

    Research and development of a rescue robot end-effector

    This report details the research, design, development and testing of an end-effector system for use on an Urban Search and Rescue (USAR) robot under development in the Robotics and Agents Research Laboratory (RARL) at the University of Cape Town (UCT). This is the 5th-generation Mobile Robot Platform (MRP) that UCT has developed ... codenamed 'Ratel'. USAR robots used to be mainly of the observation type, but newer robots (including UCT's Ratel MRP) are being developed to deal with inherently dynamic, complex and unpredictable disaster response situations, particularly those involving object manipulation and gripping. In order to actively interact with the environment, a flexible and robust gripping system is vital. [An] end-effector solution ... was developed for the Ratel manipulator arm to fulfil these functions.

    Design of a Multi-Mode Hybrid Micro-Gripper for Surface Mount Technology Component Assembly

    In the last few decades, industrial sectors such as smart manufacturing and aerospace have developed rapidly, contributing to increased production of more complex electronic boards based on SMT (Surface Mount Technology). The assembly phases in manufacturing these electronic products require technological solutions able to deal with many heterogeneous products and components. Small-batch production and pre-production are often executed manually or with semi-automated stations. The commercial automated machines currently available offer high performance, but they are highly rigid. Therefore, a great effort is needed to obtain machines and devices with improved reconfigurability and flexibility, to minimize set-up time and to process the high heterogeneity of components. These high-level objectives can be achieved in different ways. Indeed, a work station can be seen as a set of devices able to interact and cooperate to perform a specific task; the reconfigurability of a work station can therefore be achieved through reconfigurable and flexible devices and their hardware and software integration and control. For this reason, significant efforts should be focused on the conception and development of innovative devices that cope with the continuous downscaling and increasing variety of products in this growing field. In this context, this paper presents the design and development of a multi-mode hybrid micro-gripper designed to manipulate and assemble a wide range of micro- and meso-scale SMT components with different dimensions and properties. It exploits two different handling technologies: vacuum and friction.

    Technology for the Future: In-Space Technology Experiments Program, part 2

    The purpose of the Office of Aeronautics and Space Technology (OAST) In-Space Technology Experiments Program (In-STEP) 1988 Workshop was to identify and prioritize technologies that are critical for future national space programs and require validation in the space environment, and to review current NASA (In-Reach) and industry/university (Out-Reach) experiments. A prioritized list of critical technology needs was developed for the following eight disciplines: structures; environmental effects; power systems and thermal management; fluid management and propulsion systems; automation and robotics; sensors and information systems; in-space systems; and humans in space. This is part two of two parts and contains the critical technology presentations for the eight theme elements and a summary listing of critical space technology needs for each theme.