11 research outputs found

    Two-Dimensional Gel Electrophoresis Image Registration Using Block-Matching Techniques and Deformation Models

    [Abstract] Block-matching techniques have been widely used in the task of estimating displacement in medical images, and they represent the best approach in scenes with deformable structures such as tissues, fluids, and gels. In this article, a new iterative block-matching technique—based on successive deformation, search, fitting, filtering, and interpolation stages—is proposed to measure elastic displacements in two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) images. The proposed technique uses different deformation models in the task of correlating proteins in real 2D electrophoresis gel images, obtaining an accuracy of 96.6% and improving the results obtained with other techniques. This technique represents a general solution, being easy to adapt to different 2D deformable cases and providing an experimental reference for block-matching algorithms. Funding: Galicia, Consellería de Economía e Industria (10MDS014CT; 10SIN105004PR); Instituto de Salud Carlos III (PI13/0028)
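
    To make the block-matching idea concrete, here is a minimal Python sketch of exhaustive block matching between two grayscale images. It is not the authors' implementation; the block size, search radius, and sum-of-squared-differences criterion are assumptions for illustration. In the article's pipeline, such per-block displacements would then feed the fitting, filtering, and interpolation stages.

```python
import numpy as np

def match_block(ref, tgt, top, left, block=16, search=8):
    """Estimate the displacement of one block from `ref` to `tgt`
    by exhaustive search, minimising the sum of squared differences."""
    patch = ref[top:top + block, left:left + block].astype(np.float64)
    best, best_dy, best_dx = np.inf, 0, 0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > tgt.shape[0] or x + block > tgt.shape[1]:
                continue  # candidate block falls outside the target image
            cand = tgt[y:y + block, x:x + block].astype(np.float64)
            cost = np.sum((patch - cand) ** 2)
            if cost < best:
                best, best_dy, best_dx = cost, dy, dx
    return best_dy, best_dx  # displacement of the block, in pixels
```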

    Learning to Synchronously Imitate Gestures Using Entrainment Effect

    Synchronisation and coordination are omnipresent and essential in human interactions. Because of their unavoidable and unintentional nature, these phenomena could be the consequence of a low-level mechanism: a driving force originating from external stimuli, called the entrainment effect. In light of its importance in interaction, and with the aim of defining new forms of human-robot interaction (HRI), we propose to model this entrainment in order to highlight its efficiency for gesture learning during imitation games and for reducing computational complexity. We also put forward the capacity for adaptation offered by the entrainment effect. Hence, we present in this paper a neural model for gesture learning by imitation using the entrainment effect, applied to a NAO robot interacting with a human partner
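
    As a minimal illustration of entrainment itself (not the paper's neural model), the sketch below simulates a phase oscillator that locks onto an external periodic stimulus; the natural frequency, coupling gain, and stimulus signal are illustrative assumptions.

```python
import numpy as np

def entrain(stimulus_phase, steps=2000, dt=0.01, omega=2.0, coupling=1.5):
    """Simulate a phase oscillator driven by an external stimulus phase.
    With sufficient coupling the oscillator locks onto the stimulus rhythm."""
    phi = 0.0
    history = []
    for k in range(steps):
        target = stimulus_phase(k * dt)
        # Kuramoto-style update: drift at the natural frequency omega,
        # plus a corrective term pulling the phase towards the stimulus.
        phi += dt * (omega + coupling * np.sin(target - phi))
        history.append(phi)
    return np.array(history)

# Example: a stimulus oscillating slightly faster than the oscillator's own rhythm.
trace = entrain(lambda t: 2.5 * t)
```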

    Extract Motion from Picture Sequence

    Optical flow is a concept for estimating the motion of objects within a visual representation. Motion is represented as vectors at every pixel of a digital image sequence of two frames taken at times t and t + δt respectively. Based on the assumptions that the observed brightness (intensity) of any object point is constant over the time interval and that the movement is small, the optical flow equation (OFE) is derived and used in an algorithm to compute the optical flow for a particular motion. To make the algorithm robust, time-varying uniform illumination and the computation of a large range of motion have also been taken into account in its formulation. The algorithm serves as the reference for the implementation in MATLAB, the software used in this project. As a result, the software generates separate views of the static and moving objects; further study of the output yields more accurate data. The experimental process is carried out using a video as input to determine whether the project succeeds or needs further modification. The outcome of this project is beneficial for motion analysts predicting the parameters of moving objects, and for applications such as animation, statistics, and robotic vision
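
    For reference, the optical flow equation mentioned above follows from the brightness-constancy and small-motion assumptions by a first-order Taylor expansion (the standard derivation, not anything specific to this project):

```latex
% Brightness constancy between the frames taken at t and t + \delta t:
\[ I(x, y, t) = I(x + \delta x,\, y + \delta y,\, t + \delta t) \]
% A first-order Taylor expansion and division by \delta t give the
% optical flow equation (OFE), with image derivatives I_x, I_y, I_t:
\[ I_x u + I_y v + I_t = 0, \qquad u = \frac{\delta x}{\delta t}, \quad v = \frac{\delta y}{\delta t} \]
```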

    Velocity Estimation for Autonomous Underwater Vehicles using Vision-Based Systems

    In this dissertation, a study is presented of a system architecture capable of computing the linear and angular velocity of an autonomous underwater vehicle (AUV) in real time, suitable for use in the control loop of an AUV. The velocity is estimated using computer vision algorithms, namely optical flow and block matching, keeping in mind the movement characteristics of autonomous underwater vehicles, i.e. their maximum velocity and acceleration, which make them slow-dynamics systems. Considering that these computer vision techniques are computationally intensive and are not compatible with real-time operation when implemented on microcomputers alone, this problem is addressed by studying a possible implementation of these techniques on a field programmable gate array (FPGA) together with microcomputers. The computer vision algorithms studied for optical flow computation were Horn-Schunck and Lucas-Kanade, with their different variations and optimisations, as well as simpler algorithms such as block matching
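
    As a rough sketch of how a dense flow field could be converted into a linear velocity estimate for a downward-looking camera over a flat seabed (our simplifying assumptions: a pinhole model, known altitude, and known frame rate; this is not the dissertation's architecture):

```python
import numpy as np

def linear_velocity_from_flow(flow, fps, focal_px, altitude_m):
    """Convert a dense optical flow field (H x W x 2, in pixels/frame) into a
    linear velocity estimate for a downward-looking camera over a flat seabed.

    Assumes a pinhole camera: a pixel displacement d corresponds to a metric
    displacement d * altitude / focal_length on the observed plane.
    """
    # Robust central tendency of the flow field (the median rejects outliers
    # caused by specularities, marine snow, etc.).
    median_flow = np.median(flow.reshape(-1, 2), axis=0)       # pixels/frame
    metres_per_frame = median_flow * altitude_m / focal_px     # metres/frame
    vx, vy = metres_per_frame * fps                            # metres/second
    return vx, vy
```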

    Estimation du flux optique en présence d'occultations par une approche TAC

    We propose a solution to the optical flow problem in the presence of occlusions using a multi-resolution approach based on computational algebraic topology (CAT). Several optical flow estimation methods rely on intensity and spatial-continuity constraints on the velocity field and lead to solving a partial differential equation (PDE). We propose to solve the problem of the undesirable smoothing of occlusion boundaries, produced by the spatial-continuity constraint, by exploiting a non-linear diffusion principle based on the multispectral gradient of the optical flow. Indeed, optical flow computation can be interpreted as a reaction-diffusion phenomenon, and the smoothing of occlusion boundaries is a consequence of the diffusion acting linearly over the whole image. One way to avoid this problem is to modify the diffusion conductivity at each pixel according to whether it lies in the neighbourhood of an occlusion boundary. This requires an accurate detection measure for occlusion boundaries so that they can be preserved. We show that the multispectral gradient of the optical flow is a suitable measure. We use the CAT approach as an alternative to PDEs; it exploits the global laws of diffusion and the principles of topological algebra to provide more robust and accurate computations. We also use the multi-resolution CAT model to address the fact that the intensity constraint is only valid for small displacements. Finally, we apply our algorithm to image registration based on optical flow
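
    A minimal sketch of the edge-stopping idea described above (not the thesis's CAT formulation): the diffusion conductivity at each pixel is reduced where the gradient of the flow field is large, so that occlusion boundaries are not smoothed over. The Perona-Malik-style weighting function and the contrast parameter k are assumptions for the example.

```python
import numpy as np

def flow_conductivity(flow, k=1.0):
    """Per-pixel diffusion conductivity derived from the gradient of a dense
    flow field (H x W x 2). Small gradient -> conductivity close to 1 (smooth
    freely); large gradient (likely occlusion boundary) -> close to 0."""
    # Gradient magnitude accumulated over both flow components,
    # a simple stand-in for a multispectral gradient measure.
    gy_u, gx_u = np.gradient(flow[..., 0])
    gy_v, gx_v = np.gradient(flow[..., 1])
    grad_sq = gx_u**2 + gy_u**2 + gx_v**2 + gy_v**2
    return 1.0 / (1.0 + grad_sq / (k * k))  # Perona-Malik-style weight
```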

    Biologically-Inspired Low-Light Vision Systems for Micro-Air Vehicle Applications

    Various insect species such as the Megalopta genalis are able to visually stabilize and navigate at light levels in which individual photo-receptors may receive fewer than ten photons per second. They do so in cluttered forest environments with astonishing success while relying heavily on optic flow estimation. Such capabilities are nowhere near being met with current technology, in large part due to limitations of low-light vision systems. This dissertation presents a body of work that enhances the capabilities of visual sensing in photon-limited environments with an emphasis on low-light optic flow detection. We discuss the design and characterization of two optical sensors fabricated using complementary metal-oxide-semiconductor (CMOS) very large scale integration (VLSI) technology. The first is a frame-based, low-light, photon-counting camera module with which we demonstrate 1-D non-directional optic flow detection with fewer than 100 photons/pixel/frame. The second utilizes adaptive analog circuits to improve room-temperature short-wave infrared sensing capabilities. This work demonstrates a reduction in dark current of nearly two orders of magnitude and an improvement in signal-to-noise ratio of nearly 40 dB when compared to similar, non-adaptive circuits. This dissertation also presents a novel simulation-based framework that enables benchmarking of optic flow algorithms in photon-limited environments. Using this framework we compare the performance of traditional optic flow processing algorithms to biologically-inspired algorithms thought to be used by flying insects such as the Megalopta genalis. This work serves to provide an understanding of what may be ultimately possible with optic flow sensors in low-light environments and informs the design of future low-light optic flow hardware
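
    As a rough illustration of the photon-limited imagery such a benchmarking framework must handle (not the dissertation's simulator), the sketch below turns an ideal intensity image into a frame dominated by photon shot noise; the per-pixel photon budget is an assumed parameter.

```python
import numpy as np

def photon_limited_frame(ideal, photons_per_pixel=100, rng=None):
    """Simulate photon shot noise: scale a non-negative intensity image so that
    the mean pixel collects `photons_per_pixel` photons per frame, then draw
    Poisson-distributed photon counts."""
    rng = np.random.default_rng() if rng is None else rng
    ideal = np.clip(np.asarray(ideal, dtype=np.float64), 0.0, None)
    expected = ideal / ideal.mean() * photons_per_pixel  # expected photon counts
    return rng.poisson(expected)                         # integer photon counts
```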

    A Methodology to Develop Computer Vision Systems in Civil Engineering: Applications in Material Testing and Fish Tracking

    [Abstract] Computer Vision provides a new and promising approach to Civil Engineering, where it is extremely important to measure real-world processes with accuracy. However, Computer Vision is a broad field, involving many techniques and topics, and defining a systematic development approach is problematic. In this thesis a new methodology is proposed for developing these systems, attending to the special characteristics and requirements of Civil Engineering. Following this methodology, two systems were developed: a system to measure displacements and deformations from real images of material surfaces taken during strength tests, which overcomes the limitations of current physical sensors that interfere with the test and only provide measurements at a single point of the material and in a single direction of movement; and a system to measure the trajectory of fish in vertical slot fishways, whose purpose is to address current shortcomings in the design of fishways by providing information on fish behaviour. These applications represent significant contributions to the field and show that the defined and implemented methodology provides a systematic and reliable framework for developing Computer Vision systems in Civil Engineering

    Variationelle 3D-Rekonstruktion aus Stereobildpaaren und Stereobildfolgen

    This work deals with 3D reconstruction and 3D motion estimation from stereo images using variational methods that are based on dense optical flow. In the first part of the thesis, we will investigate a novel application for dense optical flow, namely the estimation of the fundamental matrix of a stereo image pair. By exploiting the high interdependency between the recovered stereo geometry and the established image correspondences, we propose a coupled refinement of the fundamental matrix and the optical flow as a second contribution, thereby improving the accuracy of both. As opposed to many existing techniques, our joint method does not solve for the camera pose and scene structure separately, but recovers them in a single optimisation step. True to our principle of joint optimisation, we further couple the dense 3D reconstruction of the scene to the estimation of its 3D motion in the final part of this thesis. This is achieved by integrating spatial and temporal information from multiple stereo pairs in a novel model for scene flow computation
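
    The following sketch illustrates the conventional, decoupled way of estimating a fundamental matrix from dense optical flow correspondences, using OpenCV's standard RANSAC estimator; the thesis instead couples this estimation with the flow computation itself, so this is only a baseline illustration with assumed grid spacing and thresholds.

```python
import numpy as np
import cv2

def fundamental_from_flow(flow, step=10):
    """Turn a dense flow field (H x W x 2, flow[y, x] = (dx, dy)) into sparse
    correspondences on a regular grid and estimate the fundamental matrix."""
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h:step, 0:w:step]
    pts1 = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float64)
    pts2 = pts1 + flow[ys.ravel(), xs.ravel()]  # matched points in the second frame
    # Robust standard estimator; a joint variational method would refine the
    # flow and the epipolar geometry together instead.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    return F, inliers
```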

    Coarse to over-fine optical flow estimation

    We present a readily applicable way to go beyond the accuracy limits of current optical flow estimators. Modern optical flow algorithms employ the coarse-to-fine approach. We suggest upgrading this class of algorithms by adding over-fine, interpolated levels to the pyramid. A theoretical analysis of the coarse-to-over-fine approach explains its advantages in handling flow-field discontinuities, and simulations show its benefit for sub-pixel motion. By applying the suggested technique to various multiscale optical flow algorithms, we reduced the estimation error by 10%-30% on the common test sequences. Using the coarse-to-over-fine technique, we obtain optical flow estimation results that are currently the best for the benchmark sequences. Keywords: optical flow
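
    A minimal sketch of the pyramid construction suggested by the abstract (the upsampling factor and interpolation filter are assumptions; the flow estimator run at each level is whatever multiscale algorithm is being upgraded):

```python
import cv2

def coarse_to_overfine_pyramid(image, coarse_levels=3, overfine_levels=1):
    """Build a pyramid that, in addition to the usual coarse (downsampled)
    levels, contains over-fine (interpolated, upsampled) levels on which a
    multiscale flow estimator can keep refining beyond the input resolution."""
    levels = []
    img = image
    for _ in range(coarse_levels):            # standard coarse levels
        img = cv2.pyrDown(img)
        levels.insert(0, img)                 # keep the coarsest level first
    levels.append(image)                      # original resolution
    img = image
    for _ in range(overfine_levels):          # interpolated over-fine levels
        img = cv2.resize(img, None, fx=2, fy=2, interpolation=cv2.INTER_CUBIC)
        levels.append(img)
    return levels                             # processed coarse -> over-fine
```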