    Computer Vision Measurements for Automated Microrobotic Paper Fiber Studies

    The mechanical characterization of paper fibers and paper fiber bonds determines the key parameters affecting the mechanical properties of paper. Although bulk measurements from test sheets can give average values, they do not yield any real fiber-level data. The current state-of-the-art methods for fiber-level measurements are slow and laborious, requiring delicate manual handling of microscopic samples. Commercial microrobotic actuators allow automated or tele-operated manipulation of microscopic objects such as fibers, but it is challenging to acquire the data needed to guide such demanding manipulation. This thesis presents a solution to the illumination problem and computer vision algorithms for obtaining the required data. The solutions are designed for a microrobotic platform that comprises actuators for manipulating the fibers and one or two microscope cameras for visual feedback.

    The algorithms have been developed both for wet fibers, which can be treated as 2D objects, and for dry fibers and fiber bonds, which are treated as 3D objects. The major innovations in the algorithms are the rules for micromanipulating curly fiber strands and the automated 3D measurement of microscale objects with random geometries. The solutions are validated by imaging and manipulation experiments with wet and dry paper fibers and dry paper fiber bonds. In the imaging experiments, the results are compared with reference data obtained either from an experienced human or from another imaging device. The results show that these solutions provide morphological data about the fibers that is accurate and precise enough to enable automated fiber manipulation. Although this thesis focuses on the manipulation of paper fibers and paper fiber bonds, both the illumination solution and the computer vision algorithms are applicable to other types of fibrous materials.
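    The 2D wet-fiber measurements mentioned above amount to extracting morphological descriptors from a traced fiber. As an illustrative sketch only (the thesis's actual algorithms are not reproduced here), one standard fiber-morphology descriptor is the curl index: the ratio of the centerline's arc length to its end-to-end distance. The function and sample point lists below are hypothetical.

```python
import math

def curl_index(centerline):
    """Curl index of a fiber: arc length of the traced centerline
    divided by the straight-line distance between its endpoints.
    A perfectly straight fiber gives 1.0; curlier fibers give more.
    `centerline` is an ordered list of (x, y) points."""
    if len(centerline) < 2:
        raise ValueError("need at least two centerline points")
    # Arc length: sum of segment lengths along the centerline.
    arc = sum(math.dist(p, q) for p, q in zip(centerline, centerline[1:]))
    # End-to-end chord between the first and last point.
    chord = math.dist(centerline[0], centerline[-1])
    if chord == 0:
        raise ValueError("fiber endpoints coincide")
    return arc / chord

# A straight fiber has curl index exactly 1.0.
straight = [(0, 0), (1, 0), (2, 0)]
# An L-shaped fiber: arc length 2, chord sqrt(2).
bent = [(0, 0), (1, 0), (1, 1)]
```

    In practice the centerline points would come from segmenting and thinning the microscope image; here they are given directly to keep the sketch self-contained.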

    3D face structure extraction from images at arbitrary poses and under arbitrary illumination conditions

    With the advent of 9/11, face detection and recognition are becoming important tools for securing homeland safety against potential terrorist attacks by tracking and identifying suspects who might be trying to engage in such activities. The technology has also proven its usefulness to law enforcement agencies, helping to identify or narrow down a possible suspect from surveillance tape of a crime scene, or to quickly find a suspect based on descriptions from witnesses.

    In this thesis we introduce several improvements to morphable-model-based algorithms and make use of the 3D face structures extracted from multiple images to conduct illumination analysis and face recognition experiments. We present an enhanced Active Appearance Model (AAM) that possesses several sub-models which are updated independently, giving the model more flexibility and better feature localization. Most appearance-based models suffer from the unpredictability of the facial background, which can result in poor boundary extraction. To overcome this problem we propose local projection models that accurately locate face boundary landmarks. We also introduce a novel and unbiased cost function that casts face alignment as an optimization problem, in which shape constraints obtained from direct motion estimation are incorporated to achieve a much higher convergence rate and more accurate alignment. Viewing angles are roughly categorized into four poses, and customized view-based AAMs align face images within each pose category. We also attempt to obtain individual 3D face structures by morphing a 3D generic face model to fit the individual faces. The face contour is generated dynamically so that the morphed face looks realistic. To overcome the correspondence problem between facial feature points on the generic and individual faces, we use an approach based on distance maps.

    With the extracted 3D face structure we study illumination effects on appearance using spherical harmonic illumination analysis. By normalizing the illumination conditions in different facial images, we extract a global illumination-invariant texture map which, jointly with the extracted 3D face structure in the form of cubic morphing parameters, completely encodes an individual face and allows the generation of images at arbitrary pose and under arbitrary illumination.

    Face recognition is conducted based on the face shape matching error, the texture error, and the illumination-normalized texture error. Experiments show that a higher face recognition rate is achieved by compensating for illumination effects. Furthermore, the fusion of shape and texture information results in better performance than using either shape or texture alone.

    Ph.D., Electrical Engineering -- Drexel University, 200
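    The spherical harmonic illumination analysis referred to above rests on the result of Basri & Jacobs and Ramamoorthi & Hanrahan that the appearance of a convex Lambertian surface is well approximated by a linear combination of the first nine real spherical harmonics evaluated at the surface normal. A minimal sketch of that basis follows; these are the standard textbook formulas, not the thesis's own code.

```python
import math

def sh9(nx, ny, nz):
    """First nine real spherical harmonics evaluated at a unit surface
    normal (nx, ny, nz), ordered Y00, Y1-1, Y10, Y11, Y2-2, Y2-1,
    Y20, Y21, Y22.  For a convex Lambertian object, pixel intensity is
    approximately a fixed linear combination of these nine values."""
    c0 = 0.5 * math.sqrt(1.0 / math.pi)       # constant (ambient) term
    c1 = math.sqrt(3.0 / (4.0 * math.pi))     # linear terms
    c2 = 0.5 * math.sqrt(15.0 / math.pi)      # quadratic xy, yz, xz terms
    c2z = 0.25 * math.sqrt(5.0 / math.pi)     # quadratic 3z^2 - 1 term
    c2d = 0.25 * math.sqrt(15.0 / math.pi)    # quadratic x^2 - y^2 term
    return [
        c0,
        c1 * ny, c1 * nz, c1 * nx,
        c2 * nx * ny, c2 * ny * nz,
        c2z * (3.0 * nz * nz - 1.0),
        c2 * nx * nz, c2d * (nx * nx - ny * ny),
    ]

# For a normal pointing straight at the camera, only the constant,
# z-linear, and 3z^2-1 terms are non-zero.
basis_front = sh9(0.0, 0.0, 1.0)
```

    Illumination normalization then reduces to estimating the nine lighting coefficients per image (e.g. by least squares over pixels with known normals) and dividing them out of the observed texture.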

    Vision dynamique pour la navigation d'un robot mobile (Dynamic Vision for the Navigation of a Mobile Robot)

    The work presented in this thesis concerns visual functionalities for dynamic scenes and their applications to mobile robotics. These functionalities center on the visual tracking of objects in image sequences. Four visual tracking methods were studied, three of which were developed specifically for this thesis: (1) snake-based contour tracking, with two variants that extend it to color image sequences and incorporate shape constraints on the tracked object; (2) region tracking by template differencing; (3) contour tracking by 1D correlation; and (4) tracking of a set of points based on the Hausdorff distance, developed in a previous thesis. These methods were analyzed for different tasks related to mobile robot navigation; a comparison across contexts yielded a characterization of the targets and conditions for which each method performs well. The results of this analysis feed a perceptual planning module that determines which objects (planar landmarks) the robot should track to guide itself along a trajectory. To control the execution of such a perceptual plan, several protocols for collaboration or hand-off between tracking methods were proposed. Finally, these methods, together with a control module for an active camera (pan, tilt, zoom), were integrated on a robot. Three experiments were carried out: a) road tracking in outdoor environments, b) tracking of primitives for visual navigation in indoor environments, and c) tracking of planar landmarks for navigation based on explicit robot localization.
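    Method (4) above compares a model point set against candidate image locations using the Hausdorff distance. The sketch below is a deliberately brute-force illustration of that distance on small 2D point sets; production trackers compute it far faster with distance transforms, and the sample sets here are hypothetical.

```python
import math

def directed_hausdorff(A, B):
    """Directed Hausdorff distance: the worst case, over points a in A,
    of the distance from a to its nearest neighbor in B."""
    return max(min(math.dist(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2D point sets.
    A value h means every point of each set lies within h of the
    other set, so small h indicates a good shape match."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# A unit square and a copy shifted by (3, 0): distance is 3.
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
shifted = [(x + 3, y) for x, y in square]
```

    A tracker based on this measure slides the model point set over candidate positions in the next frame and keeps the position minimizing the (partial) Hausdorff distance, which tolerates some occlusion and clutter.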