Image Understanding and Robotics Research at Columbia University
Over the past year, the research investigations of the Vision/Robotics Laboratory at Columbia University have reflected the interests of its four faculty members, two staff programmers, and 16 Ph.D. students. Several of the projects involve other faculty members in the department or the university, or researchers at AT&T, IBM, or Philips. We list below a summary of our interests and results, together with the principal researchers associated with them. Since it is difficult to separate those aspects of robotic research that are purely visual from those that are vision-like (for example, tactile sensing) or vision-related (for example, integrated vision-robotic systems), we have listed all robotic research that is not purely manipulative. The majority of our current investigations are deepenings of work reported last year; this was the second year of both our basic Image Understanding contract and our Strategic Computing contract. Therefore, the form of this year's report closely resembles last year's. Although there are a few new initiatives, mainly we report the new results we have obtained in the same five basic research areas. Much of this work is summarized on a videotape that is available on request. We also note two service contributions this past year. The Special Issue on Computer Vision of the Proceedings of the IEEE, August 1988, was co-edited by one of us (John Kender [27]). And the upcoming IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 1989, is co-program chaired by one of us (John Kender [23]).
View Generated Database
This document represents the final report for the View Generated Database (VGD) project, NAS7-1066. It documents the work done on the project up to the point at which all project work was terminated due to lack of project funds. The VGD was to provide the capability to accurately represent any real-world object or scene as a computer model. Such models include both an accurate spatial/geometric representation of the surfaces of the object or scene, as well as any surface detail present on the object. Applications of such models are numerous, including acquisition and maintenance of world models for tele-autonomous systems, generation of accurate 3-D geometric/photometric models for various 3-D vision systems, and graphical models for realistic rendering of 3-D scenes via computer graphics.
Sensing and Control for Robust Grasping with Simple Hardware
Robots can move, see, and navigate in the real world outside carefully structured factories, but they cannot yet grasp and manipulate objects without human intervention. Two key barriers are the complexity of current approaches, which require complicated hardware or precise perception to function effectively, and the challenge of understanding system performance in a tractable manner given the wide range of factors that impact successful grasping. This thesis presents sensors and simple control algorithms that relax the requirements on robot hardware, and a framework to understand the capabilities and limitations of grasping systems.
Methods for Real-time Visualization and Interaction with Landforms
This thesis presents methods to enrich data modeling and analysis in the geoscience domain with a particular focus on geomorphological applications. First, a short overview of the relevant characteristics of the remote sensing data used, and the basics of its processing and visualization, is provided. Then, two new methods for the visualization of vector-based maps on digital elevation models (DEMs) are presented. The first method uses a texture-based approach that generates a texture from the input maps at runtime, taking into account the current viewpoint. In contrast, the second method utilizes the stencil buffer to create a mask in image space that is then used to render the map on top of the DEM. A particular challenge in this context is posed by the view-dependent level-of-detail representation of the terrain geometry. After suitable visualization methods for vector-based maps have been investigated, two landform mapping tools for the interactive generation of such maps are presented. The user can carry out the mapping directly on the textured digital elevation model and thus benefit from the 3D visualization of the relief. Additionally, semi-automatic image segmentation techniques are applied in order to reduce the amount of user interaction required and thus make the mapping process more efficient and convenient. The challenge in the adaptation of the methods lies in the transfer of the algorithms to the quadtree representation of the data and in the application of out-of-core and hierarchical methods to ensure interactive performance. Although high-resolution remote sensing data are often available today, their effective resolution at steep slopes is rather low due to the oblique acquisition angle. For this reason, remote sensing data are suitable only to a limited extent for visualization as well as landform mapping purposes.
To provide an easy way to supply additional imagery, an algorithm for registering uncalibrated photos to a textured digital elevation model is presented. A particular challenge in registering the images is posed by large variations among the photos in resolution, lighting conditions, seasonal changes, etc. The registered photos can be used to increase the visual quality of the textured DEM, in particular at steep slopes. To this end, a method is presented that combines several georegistered photos into textures for the DEM. The difficulty in this compositing process is to create a consistent appearance and avoid visible seams between the photos. In addition, the photos also provide valuable means to improve landform mapping. To this end, an extension of the landform mapping methods is presented that allows the utilization of the registered photos during mapping. This way, a detailed and exact mapping becomes feasible even at steep slopes.
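The image-space masking idea described above (a mask flags the pixels covered by a mapped polygon, and map or photo colour is composited over the textured DEM only there) can be sketched on the CPU with NumPy. This is a minimal stand-in for the stencil-buffer pass, not the thesis implementation; the polygon, colours, and blend factor are illustrative assumptions.

```python
import numpy as np

def polygon_mask(shape, poly):
    """Boolean image-space mask for a convex, counter-clockwise polygon.

    A CPU stand-in for the stencil-buffer pass: pixels inside the mapped
    landform polygon are flagged for compositing.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = np.ones(shape, dtype=bool)
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        # keep pixels on the left side of every (CCW) edge
        inside &= (x1 - x0) * (yy - y0) - (y1 - y0) * (xx - x0) >= 0
    return inside

def composite(terrain_rgb, map_rgb, mask, alpha=0.6):
    """Blend the map colour over the textured DEM inside the mask only."""
    out = terrain_rgb.astype(float).copy()
    out[mask] = (1 - alpha) * out[mask] + alpha * np.asarray(map_rgb, float)
    return out.astype(np.uint8)
```

On the GPU the same effect is obtained by rendering the polygon into the stencil buffer and drawing the map pass with the stencil test enabled, which avoids building the mask explicitly.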
Tele-Autonomous control involving contact
Object localization and its application in tele-autonomous systems are studied. Two object localization algorithms are presented, together with methods for extracting several important types of object features. The first algorithm is based on line-segment to line-segment matching. Line range sensors are used to extract line-segment features from an object. The extracted features are matched to corresponding model features to compute the location of the object. The inputs of the second algorithm are not limited to line features: feature points (point-to-point matching) and featured unit direction vectors (vector-to-vector matching) can also be used, and there is no upper limit on the number of features input. The algorithm thus allows the use of redundant features to find a better solution. It uses dual number quaternions to represent the position and orientation of an object and uses least-squares optimization to find an optimal solution for the object's location. The advantage of this representation is that the method solves the location estimation by minimizing a single cost function associated with the sum of the orientation and position errors, and thus performs better, in both accuracy and speed, than other similar algorithms. The difficulties that arise when an operator controls a remote robot to perform manipulation tasks are also discussed. The main problems facing the operator are time delays in signal transmission and the uncertainties of the remote environment. How object localization techniques can be used together with other techniques, such as predictor display and time desynchronization, to help overcome these difficulties is then discussed.
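For the point-to-point matching case, the least-squares pose problem the abstract describes can be illustrated in a few lines. The sketch below uses the standard SVD (Kabsch) solution rather than the dissertation's dual-number-quaternion formulation; both minimise the summed squared matching error between model and sensed features.

```python
import numpy as np

def estimate_pose(model_pts, sensed_pts):
    """Least-squares rigid transform (R, t) aligning model to sensed points.

    Illustrative SVD (Kabsch) solution for the point-to-point matching
    problem; the dissertation instead parameterises the pose with dual
    number quaternions and minimises an equivalent single cost function.
    """
    P = np.asarray(model_pts, float)
    Q = np.asarray(sensed_pts, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Because every correspondence contributes one term to the same cost, redundant features simply tighten the estimate rather than requiring a separate fusion step.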
GRASP News, Volume 6, Number 1
A report of the General Robotics and Active Sensory Perception (GRASP) Laboratory, edited by Gregory Long and Alok Gupta.
Intelligent gripper design and application for automated part recognition and gripping
Intelligent gripping may be achieved through gripper design, automated part recognition, intelligent algorithms for control of the gripper, and on-line decision-making based on sensory data. A generic framework that integrates sensory data, part recognition, decision-making and gripper control to achieve intelligent gripping based on an ABB industrial robot is constructed. The three-fingered gripper designed and developed in this project, actuated by a linear servo actuator for precise speed and position control, is capable of handling a large variety of objects. Generic algorithms for intelligent part recognition are developed. Edge vector representation is discussed. Object geometric features are extracted. Fuzzy logic is successfully utilized to enhance the intelligence of the system. The generic fuzzy logic algorithm, which may also find application in other fields, is presented. A model-based grasp planning algorithm is proposed that extracts object grasp features from geometric features and reasons out a grasp model for objects of different geometries. Manipulator trajectory planning solves the problem of generating robot programs automatically. Object-oriented programming based on Visual C++ MFC is used to construct the system software so as to ensure compatibility, expandability and modular design. A hierarchical architecture for intelligent gripping is discussed, which partitions the robot's functionalities into high-level layers (modeling, recognizing, planning and perception) and low-level layers (sensing, interfacing and execution). The individual system modules are integrated seamlessly to constitute the intelligent gripping system.
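As a sketch of how fuzzy logic can make a gripping decision tolerant of imprecise measurements: the abstract does not give its rule base, so the membership functions and force values below are invented for illustration only.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership with peak at b over support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grip_force(width_mm):
    """Fuzzy inference of grip force from a measured part width.

    Hypothetical rule base (numbers are illustrative, not from the thesis):
      narrow part -> light force, medium -> moderate, wide -> firm.
    Defuzzified by a membership-weighted average of the rule outputs.
    """
    mu_narrow = tri(width_mm, 0, 20, 60)
    mu_medium = tri(width_mm, 20, 60, 100)
    mu_wide = tri(width_mm, 60, 100, 160)
    forces = {5.0: mu_narrow, 15.0: mu_medium, 30.0: mu_wide}  # newtons
    total = sum(forces.values())
    if total == 0:
        return 0.0  # width outside all rule supports: no grasp
    return sum(f * mu for f, mu in forces.items()) / total
```

The overlapping memberships mean a part of width 40 mm is judged partly "narrow" and partly "medium", and the commanded force varies smoothly between the two rules instead of jumping at a crisp threshold.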
Integration of 3D vision based structure estimation and visual robot control
Enabling robot manipulators to manipulate and/or recognise arbitrarily placed 3D objects under sensory control is one of the key issues in robotics. Robot sensors should be capable of providing 3D information about objects in order to accomplish the above-mentioned tasks, and should also provide the means for multisensor or multimeasurement integration. Finally, such 3D information should be used efficiently to perform the desired tasks.
This work develops a novel computational framework for solving some of these problems. A vision (camera) sensor is used in conjunction with a robot manipulator, in the framework of active vision, to estimate the 3D structure (3D geometrical model) of a class of objects. This information is used for visual robot control, in the framework of model-based vision.
The first part of this dissertation is devoted to system calibration. Camera and eye/hand calibration is presented. Several contributions are introduced in this part, intended to improve existing calibration procedures, resulting in more efficient and accurate calibrations. Experimental results are presented.
The second part of this work is devoted to methods of image processing and image representation. Methods for extracting and representing the image features that comprise the vision-based measurements are given.
The third part of this dissertation is devoted to 3D geometrical model reconstruction for a class of objects (polyhedral objects). A new technique for 3D model reconstruction from an image sequence is introduced. This algorithm estimates a 3D model of an object in terms of 3D straight-line segments (a wire-frame model) by integrating pertinent information over an image sequence. The image sequence is obtained from a moving camera mounted on a robot arm. Experimental results are presented.
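The core geometric step in building such a wire-frame model from a moving camera can be illustrated with linear two-view triangulation: each matched segment endpoint seen from two arm positions yields one 3D point. This is a generic DLT sketch under the standard pinhole model, not the dissertation's exact multi-frame integration scheme.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices for two positions of the
    arm-mounted camera (known from the robot's kinematics and the
    calibration). x1, x2: matched pixel coordinates of the same point.
    Triangulating both endpoints of a matched edge gives one 3D
    straight-line segment of the wire-frame model.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],          # each view contributes two
        x1[1] * P1[2] - P1[1],          # linear constraints on X
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)         # null vector = homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]                 # dehomogenise
```

With more than two frames, the extra rows simply extend `A`, which is one way the "integration over an image sequence" can reduce noise in the estimate.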
The fourth part of this dissertation is devoted to robot visual control. A new visual control strategy is introduced. In particular, the homogeneous transformation matrix needed for the robot gripper to grasp an arbitrarily placed 3D object is estimated. This problem is posed as one of estimating the 3D displacement (motion) between the reference model of an object and the actual model of the object. Further, the basic algorithm is extended to handle multiple-object manipulation and recognition. Experimental results are presented.
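The displacement-based grasp strategy can be written compactly with homogeneous transforms: the displacement that carries the reference model to the actual model also carries a grasp taught on the reference model to the displaced object. This is a generic rigid-body sketch of that idea; the function names are mine, not the dissertation's.

```python
import numpy as np

def make_T(R, t):
    """Pack a rotation R (3x3) and translation t (3,) into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def grasp_transform(T_obj_actual, T_obj_ref, T_grasp_ref):
    """Gripper pose for an arbitrarily placed object.

    D = T_obj_actual @ inv(T_obj_ref) is the estimated 3D displacement
    between the reference and actual object models; applying D to the
    reference grasp re-targets it to the object's actual location.
    """
    D = T_obj_actual @ np.linalg.inv(T_obj_ref)
    return D @ T_grasp_ref
```

When the reference model is defined in its own frame (T_obj_ref = I), this reduces to expressing the taught grasp directly in the estimated object pose.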
Design and implementation of robotic control for industrial applications
Background: With the pressing need for increased productivity and delivery of end products of uniform quality, industry is turning more and more to computer-based automation. At present, most industrial automated manufacturing is carried out by special-purpose machines designed to perform specific functions in a manufacturing process. The inflexibility and generally high cost of these machines, often referred to as hard automation systems, have led to a broad-based interest in the use of robots capable of performing a variety of manufacturing functions in a more flexible working environment and at lower production costs. A robot is a reprogrammable general-purpose manipulator with external sensors that can perform various assembly tasks. A robot may possess intelligence, which is normally due to the computer algorithms associated with its control and sensing systems. Industrial robots are general-purpose, computer-controlled manipulators consisting of several rigid links connected in series by revolute or prismatic joints. Most of today's industrial robots, though controlled by mini- and microcomputers, are basically simple positional machines. They execute a given task by playing back a prerecorded or preprogrammed sequence of motions that has previously been guided or taught using a hand-held teach box. Moreover, these robots are equipped with little or no external sensing for obtaining information vital to their working environment. As a result, robots are used mainly for relatively simple, repetitive tasks. More research effort has been directed at sensory feedback systems, which has improved the overall performance of the manipulator system. An example of a sensory feedback system is a vision Charge-Coupled Device (CCD) system, which can be used to adjust the robot's position depending on the surrounding environment (various object profile sizes). Such a vision system can only be used within the robot's movement envelope.
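The teach-and-playback mode of operation described above can be sketched schematically. This is a toy model of the concept, not any particular controller's API: joint targets taught by hand are stored in order and later replayed.

```python
class TeachPendant:
    """Minimal record-and-playback controller: joint targets taught by
    hand are stored, then replayed in sequence. A schematic sketch only;
    real controllers add interpolation, speed limits, and safety
    interlocks."""

    def __init__(self):
        self.program = []               # ordered list of taught joint vectors

    def teach(self, joints):
        """Record the current joint configuration as the next waypoint."""
        self.program.append(tuple(joints))

    def play(self, move):
        """Replay the taught sequence through a move(joints) callback."""
        for joints in self.program:
            move(joints)
```

The absence of any sensor input in `play` is exactly the limitation the passage points to: the robot repeats the taught motion regardless of what is actually in front of it.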