3,846 research outputs found

    A fast 3-D object recognition algorithm for the vision system of a special-purpose dexterous manipulator

    A fast 3-D object recognition algorithm that can be used as a quick-look subsystem to the vision system of the Special-Purpose Dexterous Manipulator (SPDM) is described. Global features that can be computed easily from range data are used to characterize the images of a viewer-centered model of an object. The algorithm speeds up processing by skipping low-level processing whenever possible: it may identify the object, reject a set of bad data at an early stage, or set up a better starting point for a more powerful algorithm to carry the work further.
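
    As a rough illustration of such a quick-look filter (a minimal sketch, not the paper's actual implementation), global features can be computed from a range image and compared against stored feature vectors of the model's views, rejecting clearly bad data before any expensive low-level processing. The feature choices, tolerance, and names below are assumptions.

```python
import numpy as np

def global_features(range_image, background=0.0):
    """Compute simple global features from a range (depth) image.

    Assumed features: object area (pixel count), mean depth, and the
    bounding-box aspect ratio of the non-background region.
    """
    mask = range_image > background
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    area = float(xs.size)
    mean_depth = float(range_image[mask].mean())
    aspect = (xs.max() - xs.min() + 1) / (ys.max() - ys.min() + 1)
    return np.array([area, mean_depth, aspect])

def quick_look(range_image, model_views, tolerance=0.15):
    """Return the best-matching viewer-centered model view, or None.

    `model_views` maps a view label to its stored feature vector.
    Rejecting early here avoids running the full recognition pipeline.
    """
    feats = global_features(range_image)
    if feats is None:
        return None  # bad data: no object in view
    best, best_err = None, np.inf
    for label, ref in model_views.items():
        err = np.max(np.abs(feats - ref) / (np.abs(ref) + 1e-9))
        if err < best_err:
            best, best_err = label, err
    return best if best_err < tolerance else None
```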

    Vision assisted robotic assembly of miniature parts

    The objective of this thesis was to design and build a work cell that used a robotic vision system to assemble a micro motor. The work cell consisted of an Adept SCARA robot with a vision system, end-of-arm tooling, and a combined work surface/lighting system. On-screen vision tools were applied to images of the parts, both binary and gray scale, to assist in accurate part recognition. Using these images, retained by the vision system as references, the robotic vision software was programmed to enable the robot to pick up the various parts in random positions within the cell and assemble them into the micro motor. The techniques used allow accurate identification of the parts regardless of their orientation during presentation for pickup or placement. The micro motor assembly involves three parts, two of which mate intimately with one another and have close clearances. The whole assembly is complicated by the small size of the individual parts; assembling the motor proved quite difficult and posed numerous problems that had to be overcome.
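
    Orientation-independent pick-up of the kind described here is commonly done by locating a part's centroid and principal axis from image moments and mapping them into robot coordinates. The OpenCV-based sketch below is an illustrative assumption, not the Adept vision software used in the thesis; the calibration model and all names are hypothetical.

```python
import cv2
import numpy as np

def locate_part(gray_image, threshold=128):
    """Locate a part in a grayscale image: centroid (px) and orientation (rad).

    Uses binary thresholding and image moments, so the result does not
    depend on how the part was presented.
    """
    _, binary = cv2.threshold(gray_image, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    part = max(contours, key=cv2.contourArea)  # largest blob = part
    m = cv2.moments(part)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # Principal-axis angle from second-order central moments.
    angle = 0.5 * np.arctan2(2 * m["mu11"], m["mu20"] - m["mu02"])
    return cx, cy, angle

def pixel_to_robot(cx, cy, scale_mm_per_px, origin_mm):
    """Map image coordinates to work-surface coordinates (assumed calibration:
    uniform scale plus a fixed origin offset)."""
    return (origin_mm[0] + cx * scale_mm_per_px,
            origin_mm[1] + cy * scale_mm_per_px)
```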

    Assembly via disassembly: A case in machine perceptual development

    First results in the effort to learn representations of objects are presented. The questions addressed are: what is innate, and what must be derived from the environment? The problem is cast in the framework of disassembling an object into two parts.

    Unmanned Aerial Vehicle Ground Control Point Deployment

    According to Federal Aviation Regulation (FAR) Part 77, all airports have imaginary approach surfaces which must remain clear of obstructions in order to ensure safe air travel. Threats of penetration to these imaginary surfaces include new construction, telephone poles and lines, and trees. While most potential threats analyzed remain relatively constant in size, objects such as trees, which grow, require annual analysis for change detection. A variety of methods are available for surveying these surfaces for potential obstructions, one being aerial mapping from photogrammetric data. Aerial mapping for surveying purposes is a process which ties overlapping photographs together using computer software that detects similar points between the images. These images require ground control points (GCPs) to establish scale and allow accurate measurement. When ground control points with known GPS locations are placed throughout the mapping area, all the points within the model can be tied to their respective GPS coordinates in longitude, latitude, and altitude. The placement of these markers is one of the most time-consuming but necessary tasks when creating an aerial map. The main objective of this project was to design and produce a method of streamlining the ground control point deployment process. This report introduces a device which, when implemented, can reduce the resources and labor needed for this process. A mechanical release system was designed and built to be carried by a drone. Remote control between two XBee RF modules was implemented to activate the rotating notch release system, following user commands. A 90-degree rotation of the servo-driven flange aligns the flange with the keyslot in the GCP, allowing it to fall. The final design was successfully operated by one pilot, and three GCPs were deployed at various locations. An important factor in designing this system was developing lightweight, high-contrast GCPs that would not impact the flight of the drone. The weight and balance of the final product were suitable for the small 3DR Solo drone used in the project, which suggests the system could be applied to drones with greater payloads and longer flight times to prepare large surveying areas in minimal time.
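
    As an illustration of the remote-controlled release described above (a sketch under assumptions, not the team's actual firmware), a small program can listen for a drop command arriving over a serial-connected XBee module and rotate a hobby servo by 90 degrees. The serial port, command byte, GPIO pin, and the pyserial/gpiozero libraries are all assumptions.

```python
import time

import serial                      # pyserial: reads bytes from the XBee radio
from gpiozero import AngularServo  # assumed servo driver on the onboard computer

# Assumed wiring and configuration.
XBEE_PORT = "/dev/ttyUSB0"
DROP_COMMAND = b"D"                # single byte sent by the ground-station XBee
SERVO_PIN = 17

def run_release():
    """Wait for drop commands and rotate the flange 90 degrees per command."""
    servo = AngularServo(SERVO_PIN, min_angle=0, max_angle=90)
    servo.angle = 0                # flange locked across the GCP keyslot
    with serial.Serial(XBEE_PORT, 9600, timeout=1) as radio:
        while True:
            data = radio.read(1)
            if data == DROP_COMMAND:
                servo.angle = 90   # align flange with keyslot; GCP falls free
                time.sleep(1.0)    # give the GCP time to clear the mechanism
                servo.angle = 0    # re-arm for the next (reloaded) GCP

if __name__ == "__main__":
    run_release()
```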

    How do robots take two parts apart

    This research is a natural progression of efforts which began with the introduction of a new research paradigm in machine perception, called Active Perception. There it was stated that Active Perception is a problem of intelligent control strategies applied to data acquisition processes, where the strategies depend on the current state of the data interpretation, including recognition. The disassembly/assembly problem is treated as an Active Perception problem, and a method for autonomous disassembly based on this framework is presented.
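
    To make the Active Perception formulation concrete, a generic control loop (an illustrative sketch, not the authors' method) alternates between acquiring data, updating the interpretation, and choosing the next sensing or manipulation action based on the current interpretation state. All names and the stopping criterion below are assumptions.

```python
def active_perception_loop(sensor, interpreter, planner, max_steps=20):
    """Generic active-perception control loop (illustrative only).

    sensor(action) -> observation
    interpreter(state, observation) -> updated interpretation state
    planner(state) -> next action, or None once the interpretation
                      (e.g. recognition or a disassembly plan) is complete.
    """
    state = None
    action = None                # first acquisition uses a default viewpoint
    for _ in range(max_steps):
        observation = sensor(action)
        state = interpreter(state, observation)
        action = planner(state)  # control strategy depends on interpretation
        if action is None:       # interpretation good enough: stop sensing
            break
    return state
```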

    Detection and Pose Determination of a Part for Bin Picking

    This thesis discusses the visual bin picking task, i.e. sequentially unloading a bin one part at a time using a camera as the primary source of information. The semi-structured variant of the bin picking task is considered. A solution based on learning the appearance model of a part with convolutional neural networks is proposed, so no hard-coded part geometry is required. The models in the developed system predict the poses of the parts and detect occlusions. The proposed system has been implemented and tested with a metallic strut bracket. The experiments have shown an estimated success rate of 95% of acquisition attempts.
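
    One way to realize a part-appearance model that both predicts pose and flags occlusion is a shared convolutional backbone with two heads. The PyTorch sketch below is a minimal illustration under assumed input size and pose parameterization, not the network actually used in the thesis.

```python
import torch
import torch.nn as nn

class PartPoseNet(nn.Module):
    """Shared CNN backbone with a pose-regression head and an
    occlusion-classification head (illustrative architecture)."""

    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(          # assumes 1x128x128 input crops
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Assumed pose parameterization: (x, y, cos(theta), sin(theta)).
        self.pose_head = nn.Linear(64, 4)
        self.occlusion_head = nn.Linear(64, 1)  # logit: part occluded or not

    def forward(self, x):
        features = self.backbone(x)
        return self.pose_head(features), self.occlusion_head(features)

# Example forward pass on a batch of detected part crops.
net = PartPoseNet()
crops = torch.randn(8, 1, 128, 128)
pose, occlusion_logit = net(crops)
```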

    CHARMIE: a collaborative healthcare and home service and assistant robot for elderly care

    The global population is ageing at an unprecedented rate. With changes in life expectancy across the world, three major issues arise: an increasing proportion of senior citizens; cognitive and physical problems progressively affecting the elderly; and a growing number of single-person households. The available data shows the ever-increasing need for efficient elderly care solutions such as healthcare service and assistive robots. Additionally, such robotic solutions provide safe healthcare assistance in public health emergencies such as the SARS-CoV-2 (COVID-19) pandemic. CHARMIE is an anthropomorphic collaborative healthcare and domestic assistant robot capable of performing generic service tasks in non-standardised healthcare and domestic environments. The combination of its hardware and software demonstrates map building and self-localisation, safe navigation through dynamic obstacle detection and avoidance, different human-robot interaction systems, speech and hearing, pose/gesture estimation, and household object manipulation. Moreover, CHARMIE performs end-to-end chores in nursing homes, domestic houses, and healthcare facilities. Examples of these chores include helping users transport items, detecting falls, tidying up rooms, following users, and setting up a table. The robot can perform a wide range of chores, either independently or collaboratively. CHARMIE provides a generic robotic solution so that older people can live longer, more independent, and healthier lives.
    This work has been supported by FCT (Fundação para a Ciência e Tecnologia) within the R&D Units Project Scope: UIDB/00319/2020. The author T.R. received funding through a doctoral scholarship from the Portuguese Foundation for Science and Technology (Fundação para a Ciência e a Tecnologia) [grant number SFRH/BD/06944/2020], with funds from the Portuguese Ministry of Science, Technology and Higher Education and the European Social Fund through the Programa Operacional do Capital Humano (POCH). The author F.G. received funding through a doctoral scholarship from the Portuguese Foundation for Science and Technology (Fundação para a Ciência e a Tecnologia) [grant number SFRH/BD/145993/2019], with funds from the Portuguese Ministry of Science, Technology and Higher Education and the European Social Fund through the Programa Operacional do Capital Humano (POCH).

    Machine vision for industry

    Nowadays, industry is changing the way it works in order to become more competitive. Industry wants to become increasingly automated in order to reduce production times, increase productivity, and improve production quality at lower cost, while being less wasteful, needing less skilled operation, being more flexible (easy to implement changes), and allowing the automated process to run around the clock with little supervision. Vision in robotics helps control production in a relatively simple way, so that a skilled operative does not have to spend time 'watching' the machine do its job. Moreover, an automatic inspection process works faster and with better quality than a human. Robot vision and image processing tools are among the most sought-after tools for quality control in industry. With one (or more) cameras and a computer controlling and analysing the acquired images, a software tool can solve a problem in a relatively easy way. An initial investment is needed to buy the necessary vision hardware, but the software can be built from a few existing tools. Instead of writing a vision-based program from scratch (re-inventing the wheel) to solve a specific problem, it is now possible to use existing image processing tools to build a software solution quickly and easily. These tools work at the grey-scale image processing level. Such high-level vision software tools do not require the developer to program at the pixel level, which makes the technology accessible even to users with little machine vision experience. To reduce the amount of image data to analyse, the user can work on 'regions of interest', thus reducing the time and space needed to analyse and store the image. The most important tools are described and their basic principles of operation explained. These tools can then be integrated and made to work together to form the full solution.
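
    As a small illustration of the region-of-interest idea described above (an OpenCV-based sketch under assumed ROI coordinates and thresholds, not the toolset surveyed in the paper), an inspection routine can restrict grey-scale processing to a fixed ROI and apply a simple threshold and blob-area check there:

```python
import cv2

# Assumed ROI of the part to inspect, in pixels: (x, y, width, height).
ROI = (200, 150, 120, 80)
MIN_AREA = 500        # assumed minimum blob area for a part to count as present

def inspect(image_path):
    """Return True if a part is present inside the ROI of a grey-scale image.

    Only the ROI is processed, which keeps analysis time and storage low.
    """
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    x, y, w, h = ROI
    roi = gray[y:y + h, x:x + w]                     # crop: work on ROI only
    _, binary = cv2.threshold(roi, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return any(cv2.contourArea(c) >= MIN_AREA for c in contours)

if __name__ == "__main__":
    print("Part present:", inspect("inspection_frame.png"))
```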