19 research outputs found
Automatic Inspection of Aeronautical Mechanical Assemblies by Matching the 3D CAD Model and Real 2D Images
In the aviation industry, automated inspection is essential for ensuring production quality: it speeds up quality-control procedures for parts and mechanical assemblies. As a result, the demand for intelligent visual inspection systems aimed at ensuring high quality on production lines is increasing. In this work, we address a very common problem in quality control: verifying that the correct part is present and that it is correctly positioned. We address the problem in two parts: first, automatic selection of informative viewpoints before the inspection process starts (offline preparation of the inspection); second, automatic processing of the images acquired from those viewpoints by matching them with the information contained in 3D CAD models. We apply this inspection system to detecting defects on aeronautical mechanical assemblies, with the aim of checking whether all the subparts are present and correctly mounted. The system can be used during manufacturing or maintenance operations. Its accuracy is evaluated on two kinds of platform: an autonomous navigation robot and a handheld tablet. The experimental results show that the proposed approach is accurate and promising for industrial applications, with the possibility of real-time inspection.
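For illustration, here is a minimal sketch of an offline informative-viewpoint selection step of the kind described above, posed as a greedy set cover over the assembly elements visible from each candidate viewpoint; the visibility sets, element names and greedy criterion are illustrative assumptions, not the published algorithm.

```python
# Greedy selection of informative viewpoints: pick viewpoints until every
# element of the assembly is covered by at least one selected view.
# The visibility sets below are illustrative placeholders, not real CAD data.

def select_viewpoints(visibility, required):
    """visibility: dict viewpoint -> set of element ids visible from it."""
    selected, covered = [], set()
    while covered != required:
        # Choose the viewpoint that adds the most still-uncovered elements.
        best = max(visibility, key=lambda v: len(visibility[v] - covered))
        gain = visibility[best] - covered
        if not gain:                      # remaining elements are never visible
            break
        selected.append(best)
        covered |= gain
    return selected, required - covered   # chosen views, elements left uncovered

if __name__ == "__main__":
    visibility = {
        "view_A": {"bolt_1", "bracket_3"},
        "view_B": {"bolt_1", "bolt_2"},
        "view_C": {"bracket_3", "clamp_7", "bolt_2"},
    }
    views, missing = select_viewpoints(
        visibility, {"bolt_1", "bolt_2", "bracket_3", "clamp_7"})
    print(views, missing)
```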
Rackham: An Interactive Robot-Guide
Rackham is an interactive robot guide that has been used in several places and exhibitions. This paper presents its design and reports on results obtained after its deployment in a permanent exhibition. The project is conducted so as to incrementally enhance the robot's functional and decisional capabilities, based on observation of the interaction between the public and the robot. Besides robustness and efficiency of the robot's navigation abilities in a dynamic environment, our focus was to develop and test a methodology for integrating human-robot interaction abilities in a systematic way. We first present the robot and some of its key design issues. Then, we discuss a number of lessons drawn from its use in interaction with the public, and how they will serve to refine our design choices and to enhance the robot's efficiency and acceptability.
Suivi visuel par filtrage particulaire. Application à l'interaction homme-robot (Visual tracking by particle filtering: application to human-robot interaction)
Nowadays, a major challenge of robotics is certainly the personal robot, able to be of service to humans. Much research in this field focuses on the development of autonomous robots intended to operate in large human environments open to the public. This perspective naturally raises the problem of the interaction and the relationship between humans and robots. During navigation, the robot must be able to detect the presence of people in its vicinity and take them into account explicitly, in order to avoid them or let them pass: the goal is to make their movements easier and safer. Moreover, it must have interaction capabilities, such as gesture recognition, that allow humans to communicate with it. This thesis focuses more specifically on the detection and tracking of people, as well as on the recognition of elementary gestures, from the video stream of a colour camera embedded on the robot. Particle filtering is well suited to this context: it avoids any restrictive assumption about the probability distributions involved in the characterisation of the problem, and the formalism makes it straightforward to combine and fuse several measurement cues. Despite this, data fusion by particle filtering seems to us to be little exploited, and often confined to a relatively restricted number of visual cues. We propose various filtering strategies in which visual information such as shape, colour and image motion is taken into account in the importance function and in the measurement model. We then compare and evaluate these filtering strategies in order to determine which combinations of visual cues and particle filtering algorithms are best suited to the interaction modalities considered for our tour-guide robot, which is expected to hail visitors, interact with them and guide them. Our last contribution concerns the recognition of symbolic gestures used to communicate with the robot: an efficient particle filtering strategy is proposed to track the hand while simultaneously recognising its configuration and the gesture dynamics in the video stream.
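As a point of reference for the filtering strategies discussed above, here is a minimal bootstrap particle filter sketch that fuses several visual cues by multiplying their likelihoods in the weighting step; the random-walk motion model, the Gaussian cue likelihoods and their parameters are illustrative assumptions, not the schemes evaluated in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, cue_likelihoods, motion_std=5.0):
    """One predict/weight/resample cycle of a bootstrap particle filter.

    particles: (N, 2) array of image positions of the tracked target.
    cue_likelihoods: list of functions p(z | x), one per visual cue
                     (e.g. colour, shape, motion); their product fuses the cues.
    """
    n = len(particles)
    # Prediction: random-walk motion model (illustrative).
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Correction: multiply the likelihoods of all cues for each particle.
    weights = weights.copy()
    for lik in cue_likelihoods:
        weights *= lik(particles)
    weights += 1e-300                    # avoid a degenerate all-zero weight vector
    weights /= weights.sum()
    # Systematic resampling to fight weight degeneracy.
    cum = np.cumsum(weights)
    cum[-1] = 1.0
    positions = (np.arange(n) + rng.random()) / n
    idx = np.searchsorted(cum, positions)
    return particles[idx], np.full(n, 1.0 / n)

if __name__ == "__main__":
    target = np.array([120.0, 80.0])     # stand-in for the "true" target position
    colour_cue = lambda p: np.exp(-np.sum((p - target) ** 2, axis=1) / (2 * 15.0 ** 2))
    motion_cue = lambda p: np.exp(-np.sum((p - target) ** 2, axis=1) / (2 * 30.0 ** 2))
    parts = rng.uniform(0, 200, (500, 2))
    w = np.full(500, 1 / 500)
    for _ in range(10):
        parts, w = particle_filter_step(parts, w, [colour_cue, motion_cue])
    print("estimate:", parts.mean(axis=0))
```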
Data fusion and eigenface based tracking dedicated to a Tour-Guide Robot
This article presents a key scenario of human-robot interaction for our tour-guide robot. Given this scenario, three visual modalities handled by the robot are outlined, namely the "search for visitors" attending the exhibition, the "proximal interaction" through the robot interface, and the "guidance mission". The paper focuses on the last two, which involve face recognition and visual data fusion in a particle filtering framework. Evaluations on key sequences in a human-centred environment show the tracker's robustness to background clutter, sporadic occlusions and groups of persons. The tracker copes with target loss by detecting it and re-initialising automatically thanks to the face recognition outcome. Moreover, the multi-cue association proved to be more robust to clutter than any of the cues individually.
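A minimal sketch of how an eigenface model can be used to decide that a face belongs to the tracked person and trigger automatic re-initialisation; the PCA-on-a-small-gallery setup and the reconstruction-error threshold are illustrative assumptions, not the recognition module of the paper.

```python
import numpy as np

def build_eigenfaces(gallery, n_components=8):
    """gallery: (n_faces, n_pixels) array of flattened, aligned face crops."""
    mean = gallery.mean(axis=0)
    centred = gallery - mean
    # Principal components of the gallery are the eigenfaces.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return mean, vt[:n_components]

def is_known_face(crop, mean, eigenfaces, threshold=0.25):
    """Accept the crop if it is well explained by the eigenface subspace."""
    centred = crop - mean
    coeffs = eigenfaces @ centred
    reconstruction = eigenfaces.T @ coeffs
    # Relative reconstruction error: small means the crop resembles the gallery.
    err = np.linalg.norm(centred - reconstruction) / (np.linalg.norm(centred) + 1e-12)
    return err < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Low-rank random data as a stand-in for aligned 32x32 face crops.
    gallery = rng.random((20, 6)) @ rng.random((6, 32 * 32))
    mean, ef = build_eigenfaces(gallery)
    print(is_known_face(gallery[0], mean, ef))   # a gallery face should pass
```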
An innovative hand-held vision-based digitizing system for 3D modelling
We describe a new hand-held 3D modelling device using vision and inertial measurements. Our system allows fast and accurate acquisition of the geometry and appearance of any 3D object. We focused our work on ease of manipulation and operation. Our methods allow automatic registration with no preparation of the scene (i.e. no markers), even when the object is moved between two acquisitions. In this paper, the design of the system and the methods developed for its use are presented. The system has been evaluated, qualitatively and quantitatively, against reference measurements provided by commercial scanning devices. The results show that this new hand-held scanning device is genuinely competitive for modelling any 3D object.
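For illustration, the registration of two acquisitions can be sketched with the standard Kabsch/Procrustes solution on matched 3D points; the correspondences are assumed given here, so this is a generic illustration rather than the device's marker-free registration pipeline.

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rotation R and translation t such that dst ~ R @ src + t.

    src, dst: (N, 3) arrays of corresponding 3D points from two acquisitions.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    h = (src - src_c).T @ (dst - dst_c)           # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst_c - r @ src_c
    return r, t

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pts = rng.random((100, 3))
    angle = 0.3
    r_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                       [np.sin(angle),  np.cos(angle), 0.0],
                       [0.0, 0.0, 1.0]])
    moved = pts @ r_true.T + np.array([0.1, -0.2, 0.05])
    r, t = rigid_registration(pts, moved)
    print(np.allclose(r, r_true), np.round(t, 3))
```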
Multi-view dense 3D modelling of untextured objects from a moving projector-cameras system
Structured-light methods achieve 3D modelling by observing, with a camera system, a known pattern projected onto the scene. The main drawback of single-projection structured-light methods is that moving the projector significantly changes the appearance of the scene at each acquisition time, so classical multi-view stereovision approaches based on appearance matching are not usable. The presented work is based on a system with two cameras and a single slide projector, embedded in a hand-held device for industrial applications (reverse engineering, dimensional control, etc.). We propose a method that performs multi-view modelling by estimating camera poses and surface reconstruction in a joint process. The proposed method is based on the extension of a stereo-correlation criterion; acquisitions are linked through a generalised expression of local homographies. The constraints brought by this formulation allow an accurate estimation of the modelling parameters for dense reconstruction of the scene and improve the result when dealing with detailed or sharp objects, compared with pairwise stereovision methods.
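As background for the extended criterion, a standard plane-induced homography between the two cameras and the zero-mean normalised cross-correlation (ZNCC) it supports can be written as follows; this is the generic textbook formulation, not the paper's exact expression.

```latex
% Homography induced by the local plane (n, d) between cameras 1 and 2
H_{12} \;=\; K_2 \left( R \;-\; \frac{t\, n^{\top}}{d} \right) K_1^{-1},
\qquad x_2 \;\sim\; H_{12}\, x_1 .

% ZNCC stereo-correlation of the two image patches related by H_{12}
\mathrm{ZNCC}(W) \;=\;
\frac{\sum_{x \in W}\bigl(I_1(x)-\bar{I}_1\bigr)\bigl(I_2(H_{12}x)-\bar{I}_2\bigr)}
     {\sqrt{\sum_{x \in W}\bigl(I_1(x)-\bar{I}_1\bigr)^2}\;
      \sqrt{\sum_{x \in W}\bigl(I_2(H_{12}x)-\bar{I}_2\bigr)^2}} .
```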
Inspection automatisée d'assemblages mécaniques : vers une approche couplée vision 2D / vision 3D (Automated inspection of mechanical assemblies: towards a coupled 2D vision / 3D vision approach)
This article proposes a methodology for the automated inspection of mechanical assemblies based on a manipulator arm equipped with an artificial vision sensor as its end effector. The proposed inspection methodology relies on coupling 2D and 3D information, in order to take advantage of both the speed of 2D analysis and the completeness of 3D data.
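A minimal sketch of the 2D/3D coupling idea: a fast 2D correlation check screens each element, and a 3D depth-map comparison against the CAD reference confirms the result; the rendered reference depth, thresholds and decision rule are illustrative assumptions, not the method of the article.

```python
import numpy as np

def inspect_element(image_roi, edge_template, measured_depth, cad_depth,
                    edge_thresh=0.4, depth_tol_mm=1.5, max_bad_ratio=0.05):
    """Two-stage presence check for one element of the assembly.

    Stage 1 (fast, 2D): correlation of the image region with a template.
    Stage 2 (slower, 3D): compare the measured depth map of the region with
    the depth map rendered from the CAD model; too many out-of-tolerance
    pixels means the part is missing or badly mounted.
    """
    # --- 2D check: normalised correlation between region and template ------
    a = image_roi - image_roi.mean()
    b = edge_template - edge_template.mean()
    score_2d = float((a * b).sum() /
                     (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    if score_2d < edge_thresh:
        return False, "2D check failed"
    # --- 3D check: fraction of pixels whose depth deviates from the CAD ----
    bad = np.abs(measured_depth - cad_depth) > depth_tol_mm
    if bad.mean() > max_bad_ratio:
        return False, "3D check failed"
    return True, "element present and correctly mounted"

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    template = rng.random((64, 64))
    roi = template + 0.05 * rng.random((64, 64))      # matching image region
    cad = np.full((64, 64), 250.0)                    # reference depth in mm
    depth = cad + rng.normal(0.0, 0.3, (64, 64))      # measured depth, small noise
    print(inspect_element(roi, template, depth, cad))
```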
Shape measurement using a new 3D-DIC algorithm that preserves sharp edges