
    Head motion tracking in 3D space for drivers

    This work presents a computer vision module capable of tracking a driver's head motion in 3D space. The module was designed to be part of an integrated system for analyzing driver behaviour, replacing costly equipment and accessories that track the head of a driver but are often cumbersome for the user. The vision module operates in five stages: image acquisition, head detection, facial feature extraction, facial feature detection, and 3D reconstruction of the tracked facial features. First, in the image acquisition stage, two synchronized monochromatic cameras are used to set up a stereoscopic system that later simplifies the 3D reconstruction of the head. Second, the driver's head is detected to reduce the size of the search space for finding facial features. Third, after a pair of images is obtained from the two cameras, the facial feature extraction stage combines image processing algorithms and epipolar geometry to track the chosen features, which in our case are the two eyes and the tip of the nose. Fourth, in the detection stage, the 2D tracking results are consolidated by combining a neural network algorithm with the geometry of the human face to discard erroneous results. Finally, in the last stage, the 3D model of the head is reconstructed from the 2D tracking results (tracking performed in each image independently) and the calibration of the stereo pair.
In addition, 3D measurements along the six axes of motion known as the degrees of freedom of the head (longitudinal, vertical, lateral, roll, pitch and yaw) are obtained. The validation of the results is carried out by running our algorithms on pre-recorded video sequences of drivers using a driving simulator; the resulting 3D measurements are then compared with those provided by a motion tracking device mounted on the driver's head.
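The final stage described above recovers 3D feature positions from 2D tracking results plus stereo calibration. The abstract gives no implementation details, but the standard linear (DLT) triangulation this implies can be sketched as follows; the names are illustrative, and `P1`/`P2` are assumed to be the 3x4 projection matrices obtained from stereo calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two calibrated views.
    P1, P2: 3x4 projection matrices; x1, x2: the point's 2D image coordinates.
    Each view contributes two linear constraints on the homogeneous 3D point."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The 3D point is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean
```

Applied per frame to the two eyes and nose tip, such a routine yields the 3D positions from which the six head pose parameters can be derived.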

    A parallel windowing approach to the Hough transform for line segment detection

    Among the wide range of image processing and computer vision problems, line segment detection has always been one of the most critical. Detection of primitives such as linear features and straight edges has diverse applications in many image understanding and perception tasks. The research presented in this dissertation is a contribution to the detection of straight-line segments by identifying the location of their endpoints within a two-dimensional digital image. The proposed method is based on a unique domain-crossing approach that takes both image-domain and parameter-domain information into consideration. First, the straight-line parameters, i.e. location and orientation, are identified using an advanced Fourier-based Hough transform. As well as producing more accurate and robust detection of straight lines, this method has been shown to be more efficient in terms of computational time than the standard Hough transform. Second, for each straight line a window-of-interest is designed in the image domain, and the disturbance caused by neighbouring segments is removed to capture the Hough transform butterfly of the target segment. In this way, a separate butterfly is constructed for each straight line. The boundaries of the butterfly wings are further smoothed and approximated by a curve-fitting approach. Finally, segment endpoints are identified using the butterfly boundary points and the Hough transform peak. Experimental results on synthetic and real images show that the proposed method enjoys superior performance compared with existing representative works.
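The voting step of the standard Hough transform, against which the dissertation's Fourier-based variant is compared, can be sketched as follows. This is the generic baseline, not the author's method; the names and discretisation are illustrative:

```python
import numpy as np

def hough_lines(edge_points, img_shape, n_theta=180):
    """Standard Hough transform: each edge pixel (y, x) votes for every
    (rho, theta) line passing through it, rho = x*cos(theta) + y*sin(theta).
    Peaks in the accumulator correspond to detected straight lines."""
    h, w = img_shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for y, x in edge_points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # offset rho to be non-negative
    return acc, thetas, diag
```

In parameter space, a finite segment produces the characteristic "butterfly" of votes spread around the peak; the dissertation exploits the shape of that butterfly to recover the segment's endpoints.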

    The Three-Dimensional Circumstellar Environment of SN 1987A

    We present the detailed construction and analysis of the most complete map to date of the circumstellar environment around SN 1987A, using ground- and space-based imaging from the past 16 years. PSF-matched difference-imaging analyses of data from 1988 through 1997 reveal material between 1 and 28 ly from the SN. Careful analysis allows the reconstruction of the probable circumstellar environment, revealing a richly structured bipolar nebula. An outer, double-lobed "Peanut," which is believed to be the contact discontinuity between the red supergiant and main sequence winds, is a prolate shell extending 28 ly along the poles and 11 ly near the equator. Napoleon's Hat, previously believed to be an independent structure, is the waist of this Peanut, which is pinched to a radius of 6 ly. Interior to this is a cylindrical hourglass, 1 ly in radius and 4 ly long, which connects to the Peanut by a thick equatorial disk. The nebulae are inclined 41° south and 8° east of the line of sight, are slightly elliptical in cross section, and are marginally offset west of the SN. From the hourglass to the large bipolar lobes, echo fluxes suggest that the gas density drops from 1-3 cm^{-3} to >0.03 cm^{-3}, while the maximum dust-grain size increases from ~0.2 micron to 2 micron and the Si:C dust ratio decreases. The nebulae have a total mass of ~1.7 Msun. The geometry of the three rings is studied, suggesting that the northern and southern rings are located 1.3 and 1.0 ly from the SN, while the equatorial ring is elliptical (b/a < 0.98) and spatially offset in the same direction as the hourglass. Comment: Accepted for publication in the ApJ Supplements; 38 pages in emulateapj format, with 52 figures.

    Bioinspired symmetry detection on resource limited embedded platforms

    This work is inspired by the vision of flying insects, which enables them to detect and locate a set of relevant objects with remarkable effectiveness despite very limited brainpower. The bioinspired approach worked out here focuses on the detection of symmetric objects by resource-limited embedded platforms such as micro air vehicles. Symmetry detection is posed as a pattern matching problem, which is solved by an approach based on composite correlation filters. Two variants of the approach are proposed, analysed and tested, in which symmetry detection is cast as 1) a static and 2) a dynamic pattern matching problem. In the static variant, images of objects are input to two-dimensional spatial composite correlation filters. In the dynamic variant, a video (resulting from platform motion) is input to a composite correlation filter whose peak response is used to define symmetry. In both cases, a novel method is used for designing the composite filter templates for symmetry detection. This method significantly reduces the level of detail which needs to be matched to achieve good detection performance. The resulting performance is systematically quantified using ROC analysis; it is demonstrated that the bioinspired detection approach outperforms, at a lower computational cost, the best state-of-the-art solution hitherto available.
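As a rough illustration of the correlation-filter machinery the thesis builds on (not its novel template-design method), a minimal frequency-domain sketch might look like this, with an equal-weight spectrum average standing in for a proper composite filter design:

```python
import numpy as np

def composite_filter(templates, shape):
    """Naive composite filter: average the conjugate spectra of the training
    templates (an equal-weight stand-in for a proper SDF-style design)."""
    return np.mean([np.conj(np.fft.fft2(t, s=shape)) for t in templates], axis=0)

def correlation_plane(scene, filt):
    """Apply the filter by pointwise multiplication in the frequency domain;
    the peak of the resulting plane marks the best-match location."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * filt))
```

A sharp, high peak in `correlation_plane` indicates a match; in the dynamic variant described above, the evolution of that peak response over the video frames is what defines symmetry.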

    Design and application of an automated system for camera photogrammetric calibration

    This work presents the development of a novel Automatic Photogrammetric Camera Calibration System (APCCS) that is capable of calibrating cameras regardless of their Field of View (FOV), resolution and sensitivity spectrum. Such calibrated cameras can, despite lens distortion, accurately determine vectors in a desired reference frame for any image coordinate, and map points in the reference frame to their corresponding image coordinates. The proposed system is based on a robotic arm which presents an interchangeable light source to the camera in a sequence of known discrete poses. A computer captures the camera's image for each robot pose and locates the light source centre in the image for each point in the sequence. Careful selection of the robot poses allows cost functions dependent on the captured poses and light source centres to be formulated for each of the desired calibration parameters. These parameters are the Brown model parameters to convert from the distorted to the undistorted image (and vice versa), the focal length, and the camera's pose. The pose is split into the camera's pose relative to its mount and the mount's pose relative to the reference frame to aid subsequent camera replacement. The parameters that minimise each cost function are determined via a combination of coarse global and fine local optimisation techniques: genetic algorithms and the Leapfrog algorithm, respectively. The real-world applicability of the APCCS is assessed by photogrammetrically stitching cameras of differing resolutions, FOVs and spectra into a single multispectral panorama. The quality of these panoramas is deemed acceptable after both subjective and quantitative analyses. The quantitative analysis compares the stitched positions of matched image feature pairs found with the Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) algorithms and shows the stitching to be accurate to within 0.3°.
The noise sensitivity of the APCCS is assessed via the generation of synthetic light source centres and robot poses. The data is realistically created for a hypothetical camera pair via the corruption of ideal data using seven noise sources emulating robot movement, camera mounting and image processing errors. The calibration and resulting stitching accuracies are shown to be largely independent of the noise magnitudes in the operational ranges tested; the APCCS is thus found to be robust to noise. The APCCS is shown to meet all its requirements by determining a novel combination of calibration parameters for cameras, regardless of their properties, in a noise-resilient manner.
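The Brown model parameters mentioned above map between distorted and undistorted image coordinates. A minimal sketch of the forward (undistorted-to-distorted) mapping, assuming the common two-radial, two-tangential form of the model in normalised coordinates, is:

```python
def brown_distort(x, y, k1, k2, p1, p2):
    """Brown (Brown-Conrady) lens distortion model, forward direction:
    undistorted normalised coordinates (x, y) -> distorted (xd, yd).
    k1, k2 are radial coefficients; p1, p2 are tangential coefficients."""
    r2 = x * x + y * y                      # squared radius from the optical axis
    radial = 1.0 + k1 * r2 + k2 * r2 * r2   # radial distortion factor
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

The inverse mapping (distorted to undistorted) has no closed form and is typically obtained iteratively, which is consistent with the abstract's use of numerical optimisation to fit these parameters.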

    Mathematical Morphology for Quantification in Biological & Medical Image Analysis

    Mathematical morphology is an established field of image processing, first introduced as an application of set and lattice theories. Originally used to characterise particle distributions, mathematical morphology has gone on to become a core tool underlying such important analysis methods as skeletonisation and the watershed transform. In this thesis, I introduce a selection of new image analysis techniques based on mathematical morphology. Utilising assumptions about shape, I propose a new approach for the enhancement of vessel-like objects in images: the bowler-hat transform. Built upon morphological operations, this approach copes well with challenges such as junctions and is robust against noise. The bowler-hat transform is shown to give better results than competing methods on challenging data such as retinal/fundus imagery. Building further on morphological operations, I introduce two novel methods for particle and blob detection: the first is developed in the context of colocalisation, a standard biological assay, and the second, based on Hilbert-Edge Detection And Ranging (HEDAR), addresses nuclei detection and counting in fluorescence microscopy. These methods are shown to produce accurate and informative results for sub-pixel and supra-pixel object counting in complex and noisy biological scenarios. I also propose a new approach for the automated extraction and measurement of object thickness for intricate and complicated vessels, such as the brain vasculature in medical images. This pipeline depends on two key technologies: semi-automated segmentation by advanced level-set methods and automatic thickness calculation based on morphological operations. The approach is validated, and results are presented that demonstrate the broad range of challenges posed by these images and the possible limitations of the pipeline.
This thesis represents a significant contribution to the field of image processing using mathematical morphology, and the methods within are transferable to a range of complex challenges across biomedical image analysis.
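The morphological operations this thesis builds on can be illustrated with the classical white top-hat transform, a simple relative of the bowler-hat transform described above (this sketch is not the thesis's method; a flat square structuring element and edge padding are assumed):

```python
import numpy as np

def grey_erode(img, k):
    """Greyscale erosion with a flat k x k structuring element (edge-padded)."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].min()
    return out

def grey_dilate(img, k):
    """Greyscale dilation with a flat k x k structuring element (edge-padded)."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].max()
    return out

def white_tophat(img, k):
    """Image minus its morphological opening: keeps bright structures
    narrower than the structuring element while suppressing the background."""
    opening = grey_dilate(grey_erode(img, k), k)
    return img - opening
```

Transforms such as the bowler-hat elaborate on this building block, e.g. by combining responses over structuring elements of varying shape and size to favour elongated, vessel-like structures.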