10 research outputs found

    Evaluation of automated decisionmaking methodologies and development of an integrated robotic system simulation

    A generic computer simulation for manipulator systems (ROBSIM) was implemented and the specific technologies necessary to increase the role of automation in various missions were developed. The specific items developed are: (1) capability for definition of a manipulator system consisting of multiple arms, load objects, and an environment; (2) capability for kinematic analysis, requirements analysis, and response simulation of manipulator motion; (3) postprocessing options such as graphic replay of simulated motion and manipulator parameter plotting; (4) investigation and simulation of various control methods including manual force/torque and active compliance control; (5) evaluation and implementation of three obstacle avoidance methods; (6) video simulation and edge detection; and (7) software simulation validation
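The kinematic analysis capability above maps joint angles to end-effector positions. A minimal planar sketch of that mapping (illustrative only; ROBSIM handled full multi-arm 3-D systems, and the function below is a hypothetical stand-in, not ROBSIM's formulation):

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """Planar forward kinematics: joint angles -> end-effector (x, y).

    Toy illustration of the kind of kinematic analysis a manipulator
    simulation performs; not ROBSIM's actual formulation.
    """
    x = y = 0.0
    theta = 0.0
    for length, angle in zip(link_lengths, joint_angles):
        theta += angle                 # joint angles accumulate along the chain
        x += length * math.cos(theta)  # each link advances the end effector
        y += length * math.sin(theta)
    return x, y

# Two unit links with both joints at 90 degrees fold the arm back on itself.
print(forward_kinematics([1.0, 1.0], [math.pi / 2, math.pi / 2]))
```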

    A strategy for the visual recognition of objects in an industrial environment.

    This thesis is concerned with the problem of recognizing industrial objects rapidly and flexibly. The system design is based on a general strategy that consists of a generalized local feature detector, an extended learning algorithm and the use of unique structure of the objects. Thus, the system is not inherently limited to the industrial environment. The generalized local feature detector uses the gradient image of the scene to provide a feature description that is insensitive to a range of imaging conditions such as object position and overall light intensity. The feature detector is based on a representative point algorithm which is able to reduce the data content of the image without restricting the allowed object geometry. Thus, a major advantage of the local feature detector is its ability to describe and represent complex object structure. The reliance on local features also allows the system to recognize partially visible objects. The task of the learning algorithm is to observe the feature description generated by the feature detector in order to select features that are reliable over the range of imaging conditions of interest. Once a set of reliable features is found for each object, the system finds unique relational structure which is later used to recognize the objects. Unique structure is a set of descriptions of unique subparts of the objects of interest. The present implementation is limited to the use of unique local structure. The recognition routine uses these unique descriptions to recognize objects in new images. An important feature of this strategy is the transfer of a large amount of the processing required for graph matching from the recognition stage to the learning stage, which allows the recognition routine to execute rapidly.
The test results show that the system is able to function with a significant level of insensitivity to operating conditions. The system shows insensitivity to its three main assumptions (constant scale, constant lighting, and 2D images), displaying a degree of graceful degradation when the operating conditions degrade. For example, for one set of test objects, the recognition threshold was reached when the absolute light level was reduced by 70%-80%, or the object scale was reduced by 30%-40%, or the object was tilted away from the learned 2D plane by 30°-40°. This demonstrates a very important feature of the learning strategy: it shows that the generalizations made by the system are not only valid within the domain of the sampled set of images, but extend outside this domain. The test results also show that the recognition routine is able to execute rapidly, requiring 10 ms-500 ms (on a PDP11/24 minicomputer) in the special case when ideal operating conditions are guaranteed. (Note: this does not include pre-processing time.) This thesis describes the strategy, the architecture and the implementation of the vision system in detail, and gives detailed test results. A proposal for extending the system to scale-independent 3D object recognition is also given
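The gradient-based representative point idea can be sketched as follows: keep only pixels whose local gradient is strong, recording the gradient vector so later stages can use its direction. This is a loose sketch under assumed details (central differences, a plain magnitude threshold); the thesis's actual representative point algorithm differs.

```python
def representative_points(image, threshold):
    """Reduce a grey-level image (list of rows) to sparse high-gradient
    'representative points', keeping the gradient vector at each point.

    Sketch only: central differences and a magnitude threshold stand in
    for the thesis's actual representative point algorithm.
    """
    h, w = len(image), len(image[0])
    points = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]  # horizontal gradient
            gy = image[y + 1][x] - image[y - 1][x]  # vertical gradient
            if gx * gx + gy * gy >= threshold * threshold:
                points.append((x, y, gx, gy))
    return points

# A vertical step edge yields points only along the step.
step = [[0, 0, 10, 10] for _ in range(4)]
print(representative_points(step, 5))
```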

    Edge detection for semantically based early visual processing


    Edge detection using neural network arbitration

    A human observer is able to recognise and describe most parts of an object by its contour, if this is properly traced and reflects the shape of the object itself. With a machine vision system this recognition task has been approached using a similar technique, which prompted the development of many diverse edge detection algorithms. The work described in this thesis is based on the visual observation that, as the image degrades, edge maps produced by different algorithms display different properties of the original image. Our objective is to improve the edge map through arbitration between edge maps produced by edge detection algorithms that are diverse in nature, approach and performance. As image processing tools are repeatedly applied to similar images, we believe this objective can be achieved by a learning process based on sample images. It is shown that such an approach is feasible, using an artificial neural network to perform the arbitration; the network is trained on sets of examples extracted from sample images. The arbitration system is implemented on a parallel processing platform. The performance of the system is presented through examples of diverse types of image. Comparisons with a neural network edge detector (also developed within this thesis) and conventional edge detectors show that the proposed system presents significant advantages
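Per-pixel arbitration between several edge maps can be sketched with a single logistic unit; the thesis trains a real neural network from sample images, so the fixed weights and bias below are purely illustrative.

```python
import math

def arbitrate(edge_maps, weights, bias):
    """Combine several edge maps (same size, values in [0, 1]) into one,
    using one logistic neuron per pixel.

    A minimal stand-in for the thesis's trained neural-network arbiter;
    the weights and bias here are illustrative, not learned.
    """
    h, w = len(edge_maps[0]), len(edge_maps[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = bias + sum(wt * m[y][x] for wt, m in zip(weights, edge_maps))
            out[y][x] = 1.0 / (1.0 + math.exp(-s))  # sigmoid activation
    return out

# Two detectors agreeing on an edge pixel give a strong combined response.
maps = [[[1.0]], [[1.0]]]
print(arbitrate(maps, weights=[2.0, 2.0], bias=-2.0))
```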

    Author index—Volumes 1–89


    Parallel computation in low-level vision

    This thesis is concerned with problems of using computers to interpret scenes from television camera pictures. In particular, it tackles the problem of interpreting the picture in terms of lines and curves, rather like an artist's line drawing. This is very time consuming if done by a single, serial processor. However, if many processors were used simultaneously it could be done much more rapidly. In this thesis the task of line and curve extraction is expressed in terms of constraints, in a form that is susceptible to parallel computation. Iterative algorithms to perform this task have been designed and tested. They are proved to be convergent and to achieve the computation specified. Some previous work on the design of properly convergent, parallel algorithms has drawn on the mathematics of optimisation by relaxation. This thesis develops the use of these techniques for applying "continuity constraints" in line and curve description. First, the constraints are imposed "almost everywhere" on the grey-tone picture data, in two dimensions. Some "discontinuities" - places where the constraints are not satisfied - remain, and they form the lines and curves required for picture interpretation. Secondly, a similar process is applied along each line or curve to segment it. Discontinuities in the angle of the tangent along the line or curve mark the positions of vertices. In each case the process is executed in parallel throughout the picture. It is shown that the specification of such a process as an optimisation problem is non-convex, and this means that an optimal solution cannot necessarily be found in a reasonable time. A method is developed for efficiently achieving a good sub-optimal solution. A parallel array processor is a large array of processor cells which can act simultaneously, throughout a picture.
A software emulator of such a processor array was coded in C, and a POP-2 based high-level language, PARAPIC, was written to drive it; both were used to validate the parallel algorithms developed in the thesis. It is argued that the scope, in a vision system, of parallel methods such as those exploited in this work is extensive. The implications for the design of hardware to perform low-level vision are discussed, and it is suggested that a machine consisting of fewer, more powerful cells than in a parallel array processor would execute the parallel algorithms more efficiently
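The idea of continuity constraints imposed "almost everywhere", with surviving discontinuities, can be sketched in one dimension: every interior site is smoothed towards its data value and its neighbours in parallel, except across jumps larger than a threshold, which survive as discontinuities. This toy version makes assumptions (fixed endpoints, a simple jump threshold) and is not the thesis's proven-convergent 2-D formulation.

```python
def relax_continuity(data, threshold, iterations):
    """1-D relaxation with continuity constraints imposed 'almost
    everywhere': each interior site moves towards its data value and any
    neighbour within `threshold`; larger jumps remain as discontinuities.

    Toy sketch only; the thesis works in 2-D and proves convergence.
    """
    u = list(data)
    for _ in range(iterations):
        new_u = list(u)
        for i in range(1, len(u) - 1):
            total, count = data[i], 1     # data term anchors the solution
            for j in (i - 1, i + 1):
                if abs(u[j] - u[i]) < threshold:  # continuity holds here
                    total += u[j]
                    count += 1
            new_u[i] = total / count
        u = new_u                         # all sites update in parallel
    return u

# Noise inside each segment is smoothed; the step of height 10 survives.
print(relax_continuity([0, 1, 0, 10, 9, 10], threshold=5, iterations=60))
```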

    The systematic development of a machine vision based milking robot

    Agriculture involves unique interactions between man, machines, and various elements from nature. Therefore the implementation of advanced technology in agriculture holds different challenges than in other sectors of the economy. This dissertation stems from research into the application of advanced technology in dairying - focusing on the systematic analysis and synthesis of concepts for a robotic milking machine for cows. The main subsystems of the milking robot are identified as a machine perception subsystem and a mechanical manipulator subsystem. The machine perception subsystem consists of one or more sensors and a signal processor, while the manipulator subsystem typically consists of a robot arm, a robot hand, actuators, and a controller. After the evaluation of different sensor concepts in terms of a defined set of technical performance requirements, television cameras are chosen as a suitable sensor concept for a milking robot. Therefore the signal processor is only concerned with image processing techniques. The primary task of the milking robot's image processor is to derive a computerized description of the spatial positions of the endpoints of a cow's four teats, in terms of a pre-defined frame of reference (called the world coordinates). This process is called scene description; based on extensive experimental results, three-dimensional scene description - making use of a stereo-vision set-up - is shown to be feasible for application as part of a milking robot. Different processes are involved in stereo machine vision - such as data reduction with the minimum loss of image information (for which the Sobel edge enhancement operator is used); the accurate localisation of target objects in the two stereo images (for which the parabolic Hough transform is used); and correlation of features in the two stereo images. These aspects are all addressed for the milking robot by means of concept analysis, trade-offs, and experimental verification.
From a trade-off based on a set of performance requirements for the manipulator subsystem, a Cartesian robot arm is chosen as a suitable configuration for the milking robot, while sealed direct-current servo motors are chosen as a suitable actuator concept. A robot arm and its actuators are designed by means of computer-aided design techniques, and computer simulation results are presented for the dynamic response of the arm and its actuators. A suitable robot hand is also designed, based on systematic trade-offs for different parts of a robot hand. From an analysis of the desired controller functions, and of different control concepts, it is concluded that a positional controller making use of on-line obstruction avoidance is required for the milking robot. Because this research project involved systematic concept exploration, there are still some details to be resolved in a follow-up development phase. The basic principles of a machine vision based milking robot are however established, and the work in this dissertation represents a suitable baseline for further development
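The Sobel edge enhancement step used for data reduction above can be sketched as follows; the 3x3 kernels and the |gx| + |gy| magnitude approximation are standard, though the dissertation's exact implementation details are not reproduced here.

```python
def sobel_magnitude(image):
    """Sobel gradient magnitude of a grey-level image (list of rows).

    Standard 3x3 Sobel kernels; |gx| + |gy| approximates the true
    magnitude and avoids a square root. Border pixels are left at 0.
    """
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (image[y - 1][x + 1] + 2 * image[y][x + 1] + image[y + 1][x + 1]
                  - image[y - 1][x - 1] - 2 * image[y][x - 1] - image[y + 1][x - 1])
            gy = (image[y + 1][x - 1] + 2 * image[y + 1][x] + image[y + 1][x + 1]
                  - image[y - 1][x - 1] - 2 * image[y - 1][x] - image[y - 1][x + 1])
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge produces a strong response along the step.
print(sobel_magnitude([[0, 0, 10, 10] for _ in range(4)]))
```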

    Design of specific hardware for fingerprint feature extraction and matching.

    Fingerprint-based biometrics is one of the most reliable identification methods known, and a serious candidate for incorporation into daily life. In recent years many new devices have incorporated fingerprint biometrics, and the use of automatic fingerprint verification systems to control access to restricted areas is no longer unusual. The market is moving towards a kind of personal smart card that combines a fingerprint sensor with a device capable of performing all of the steps of the biometric algorithm. In this framework, the thesis pursues the integration of biometric systems and smart cards; the goal is an embedded security system, based on fingerprint biometrics, that prevents fraudulent use by verifying the holder's identity. 
Traditionally, the algorithms used for fingerprint feature extraction have been based on the successive application of complex image processing functions. These algorithms have been designed with only correct feature extraction in mind; until now, none has been designed with cost or portability optimization in mind, and the systems have been developed on personal computer platforms, on high-performance (and high-cost) microprocessors, or on dedicated digital signal processors (DSPs). This thesis develops an algorithm for extracting the physical features (minutiae) of fingerprints directly from the grey-scale image; the processing requires no multiplications or divisions and no floating-point operations. Since the correct estimation of the ridge line directions is usually the most critical, and computationally most expensive, part of minutiae extraction algorithms, a specific algorithm has also been designed for this task. 
To obtain a real-time extraction system suitable for implementation on low-cost microprocessors, a hardware-software co-design has been carried out. Coprocessors have been implemented for the hardware realization of both the direction estimation algorithm and the remaining critical tasks, which were identified by analyzing the execution profile of the designed algorithms. The direction estimation method incorporates a novel computational optimization that is adapted to the specific precision requirements and avoids computationally expensive operations. A numerical value indicating the reliability of the estimate is associated with each computed orientation; this value simplifies a preliminary fingerprint segmentation stage, an important step in the extraction process that has usually been studied separately from extraction, with a corresponding increase in total computational cost. Together, these functions and algorithms, with their hardware counterparts, yield an electronic device of small size, low cost and high-quality results, opening new application fields for fingerprint-based personal identification and authentication
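The low-cost direction estimation can be sketched as follows: among a few quantized candidate directions, the ridge orientation is the one along which grey levels change least, and the spread between the best and worst candidate serves as the reliability value. Apart from loop bookkeeping, the pixel computation uses only integer additions, subtractions and comparisons. All details here (four directions, the cost measure) are illustrative assumptions, not the thesis's actual algorithm.

```python
# Candidate directions as unit steps: 0, 45, 90 and 135 degrees.
DIRECTIONS = [(1, 0), (1, 1), (0, 1), (-1, 1)]

def ridge_orientation(image, x, y, radius=3):
    """Estimate the local ridge direction at (x, y) and a reliability value.

    Sketch of a low-cost scheme in the spirit of the thesis: pixel
    arithmetic uses only integer add/subtract/abs/compare, with no
    multiplications, divisions or floating point. Illustrative only.
    """
    costs = []
    for dx, dy in DIRECTIONS:
        cost = 0
        for sx, sy in ((dx, dy), (-dx, -dy)):   # walk both ways from centre
            px, py, prev = x, y, image[y][x]
            for _ in range(radius):
                px += sx
                py += sy
                cur = image[py][px]
                cost += abs(cur - prev)          # grey-level change along path
                prev = cur
        costs.append(cost)
    best = min(range(len(costs)), key=costs.__getitem__)
    reliability = max(costs) - costs[best]       # spread = confidence value
    return best, reliability

# Horizontal ridges (rows alternate dark/light): direction 0 wins cleanly.
stripes = [[(row % 2) * 10] * 9 for row in range(9)]
print(ridge_orientation(stripes, 4, 4))
```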