
    Coupling Vanishing Point Tracking with Inertial Navigation to Estimate Attitude in a Structured Environment

    This research aims to obtain accurate and stable estimates of a vehicle's attitude by coupling consumer-grade inertial and optical sensors. This goal is pursued by first modeling both inertial and optical sensors and then developing a technique for identifying vanishing points in perspective images of a structured environment. The inertial and optical processes are then coupled so that each aids the other. The vanishing point measurements are combined with the inertial data in an extended Kalman filter to produce overall attitude estimates. This technique is experimentally demonstrated in an indoor corridor setting using a motion profile designed to simulate flight. Through comparison with a tactical-grade inertial sensor, the combined consumer-grade inertial and optical data are shown to produce a stable attitude solution accurate to within 1.5 degrees. A measurement bias is manifested which degrades the accuracy by up to another 2.5 degrees.
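    As a rough illustration of the fusion step described above, the sketch below runs one extended Kalman filter cycle in which gyro-propagated attitude errors are corrected by a vanishing-point measurement. The three-state error model, the assumption that a vanishing point observes only pitch and yaw, and all noise levels are illustrative choices, not the thesis's implementation.

```python
# Minimal sketch (not the thesis implementation): one EKF cycle fusing a
# gyro-propagated attitude-error state with a vanishing-point measurement.
# State x is a 3-vector of small attitude errors (roll, pitch, yaw).
import numpy as np

def propagate(x, P, Q):
    """Propagate the error state; errors are modeled as a random walk (F = I)."""
    return x, P + Q

def vp_update(x, P, z, R):
    """Update with a vanishing-point measurement of pitch and yaw (2-vector)."""
    H = np.array([[0.0, 1.0, 0.0],    # assumption: vanishing point observes pitch ...
                  [0.0, 0.0, 1.0]])   # ... and yaw, but not roll
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

# toy usage with illustrative noise levels
x = np.zeros(3)
P = np.eye(3) * np.deg2rad(1.0) ** 2
Q = np.eye(3) * np.deg2rad(0.05) ** 2
R = np.eye(2) * np.deg2rad(0.5) ** 2
x, P = propagate(x, P, Q)
x, P = vp_update(x, P, np.deg2rad([0.3, -0.2]), R)
print(np.rad2deg(x))   # corrected attitude-error estimate in degrees
```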

    A comparative study of edge detection techniques

    The problem of detecting edges in gray-level digital images is considered. A literature survey of the existing methods is presented. Based on the survey, two methods that are well accepted by a majority of investigators are identified: 1) the Laplacian of Gaussian (LoG) operator, and 2) an optimal detector based on maxima in the gradient magnitude of a Gaussian-smoothed image. The latter has been proposed by Canny and will be referred to as Canny's method. The purpose of the thesis is to compare the performance of these popular methods. In order to increase the scope of the comparison, two additional methods are considered. The first is one of the simplest methods, based on a first-order approximation of the first derivative of the image; it has the advantage of a relatively low computational cost. The second is an attempt to develop an edge-fitting method based on eigenvector least-squared-error fitting of an intensity profile, developed with the intent of keeping edge localization errors small. All four methods are implemented and applied to several digital images, both real and synthesized. Results show that the LoG method and Canny's method perform quite well in general, which explains the popularity of these methods. On the other hand, even the simplest first-derivative method is found to perform well if applied properly. Based on the results of the comparative study, several critical issues related to edge detection are pointed out. Results also indicate the feasibility of the proposed method based on the eigenvector fit. Improvements and recommendations for further work are made.
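    To make the two leading operators concrete, the sketch below applies a Laplacian-of-Gaussian zero-crossing detector and Canny's detector to a standard test image. SciPy and scikit-image are used purely for convenience, and the sigma values are arbitrary examples rather than the thesis's settings.

```python
# Illustrative comparison of the two operators discussed above:
# 1) LoG zero-crossings, 2) Canny's gradient-maxima detector.
import numpy as np
from scipy import ndimage
from skimage import data, feature

image = data.camera().astype(float)   # any grayscale test image

# 1) LoG: Gaussian smoothing + Laplacian, then mark sign changes (zero-crossings)
log = ndimage.gaussian_laplace(image, sigma=2.0)
zero_cross = np.zeros_like(log, dtype=bool)
zero_cross[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
zero_cross[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])

# 2) Canny: maxima of the gradient magnitude of a Gaussian-smoothed image,
#    with hysteresis thresholding
canny = feature.canny(image, sigma=2.0)

print("LoG edge pixels:  ", int(zero_cross.sum()))
print("Canny edge pixels:", int(canny.sum()))
```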

    Feature detection in grayscale aerial images

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Civil and Environmental Engineering, 1999. Includes bibliographical references (p. 95-96). By Katherine Treash, S.M.

    Improvements in IDS: adding functionality to Wazuh

    Bachelor's thesis in Computer Engineering, academic year 2018-2019. Cybersecurity today is very complex: there are many sub-fields and specialist tools, and it could be argued that it is impossible to guarantee that any system is totally safe. In this project we put ourselves in the shoes of an enterprise system administrator who wants to improve security by detecting intrusions on the servers they manage. This perspective is key to deciding which technologies and tools are chosen in this project.

    Machine learning methods for 3D object classification and segmentation

    Field of study: Computer Science. Dr. Ye Duan, Thesis Supervisor. Includes vita. July 2018. Object understanding is a fundamental problem in computer vision, and it has been extensively researched in recent years thanks to the availability of powerful GPUs and labelled data, especially in the context of images. However, 3D object understanding is still not on par with its 2D counterpart, and deep learning for 3D has not been fully explored yet. In this dissertation, I work on two approaches, both of which advance the state of the art in 3D classification and segmentation. The first approach, called MVRNN, is based on the multi-view paradigm. In contrast to MVCNN, which does not generate consistent results across different views, our MVRNN treats the multi-view images as a temporal sequence, correlates their features, and generates coherent segmentation across different views. MVRNN demonstrated state-of-the-art performance on the Princeton Segmentation Benchmark dataset. The second approach, called PointGrid, is a hybrid method that combines points with a regular grid structure. 3D points retain fine details but are irregular, which is a challenge for deep learning methods. A volumetric grid is simple and has a regular structure, but does not scale well with data resolution. Our PointGrid is simple and allows fine details to be consumed by normal convolutions under a coarser-resolution grid. PointGrid achieved state-of-the-art performance on the ModelNet40 and ShapeNet datasets in 3D classification and object part segmentation. Includes bibliographical references (pages 116-140).
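    The following is a rough sketch of the point-in-grid idea described above, not the published PointGrid code: an unordered point cloud is scattered into a coarse regular grid, keeping up to K point coordinates per cell, so that ordinary 3D convolutions can consume the result. The grid size and K are illustrative choices.

```python
# Hypothetical point-cloud-to-grid embedding in the spirit of PointGrid.
import numpy as np

def points_to_grid(points, n=16, k=4):
    """points: (M, 3) array in [0, 1]^3 -> dense grid of shape (n, n, n, k*3)."""
    grid = np.zeros((n, n, n, k * 3), dtype=np.float32)
    counts = np.zeros((n, n, n), dtype=np.int32)
    cells = np.clip((points * n).astype(int), 0, n - 1)   # cell index per point
    for p, (i, j, l) in zip(points, cells):
        c = counts[i, j, l]
        if c < k:                                          # keep the first k points per cell
            grid[i, j, l, c * 3:(c + 1) * 3] = p
            counts[i, j, l] = c + 1
    return grid

cloud = np.random.rand(2048, 3)      # toy point cloud
voxels = points_to_grid(cloud)
print(voxels.shape)                  # (16, 16, 16, 12), ready for a 3D convolution stack
```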

    Image processing for plastic surgery planning

    This thesis presents some image processing tools for plastic surgery planning. In particular, it presents a novel method that combines local and global context in a probabilistic relaxation framework to identify cephalometric landmarks used in maxillofacial plastic surgery. It also uses a method that exploits global and local symmetry to identify abnormalities in frontal CT images of the human body. The proposed methodologies are evaluated on several clinical datasets supplied by collaborating plastic surgeons.
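    As a hedged sketch of a generic probabilistic relaxation labelling update of the kind used to combine local and global context for landmark identification, the example below iteratively reinforces label probabilities that are compatible with their neighbours. The compatibility coefficients and initial probabilities are toy values, not the thesis's model.

```python
# Generic probabilistic relaxation labelling iteration (toy example).
import numpy as np

def relaxation_step(P, R):
    """One relaxation iteration.
    P: (n_objects, n_labels) current label probabilities.
    R: (n_objects, n_objects, n_labels, n_labels) compatibility coefficients.
    """
    # support q[i, a] = sum_j sum_b R[i, j, a, b] * P[j, b]
    q = np.einsum('ijab,jb->ia', R, P)
    P_new = P * q                                        # reinforce compatible labels
    return P_new / P_new.sum(axis=1, keepdims=True)      # renormalise per object

# toy usage: 3 candidate points, 2 landmark labels, random compatibilities
rng = np.random.default_rng(0)
P = np.full((3, 2), 0.5)
R = rng.uniform(0.5, 1.5, size=(3, 3, 2, 2))
for _ in range(10):
    P = relaxation_step(P, R)
print(P)   # probabilities converge toward the most mutually compatible labelling
```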

    Spin observables in kaon photoproduction from the neutron in a deuterium target with CLAS

    This work presents the first ever measurements of several polarization observables for the reactions γn → K⁰Λ and γn → K⁰Σ⁰.

    Image understanding and feature extraction for applications in industry and mapping

    Bibliography: p. 212-220. The aim of digital photogrammetry is the automated extraction and classification of the three-dimensional information of a scene from a number of images. Existing photogrammetric systems are semi-automatic, requiring manual editing and control, and have very limited domains of application, so that image understanding capabilities are left to the user. Among the most important steps in a fully integrated system are the extraction of features suitable for matching, the establishment of the correspondence between matching points, and object classification. The following study attempts to explore the applicability of pattern recognition concepts in conjunction with existing area-based methods, feature-based techniques and other approaches used in computer vision, in order to increase the level of automation and as a general alternative and addition to existing methods. As an illustration of the pattern recognition approach, examples of industrial applications are given. The underlying method is then extended to the identification of objects in aerial images of urban scenes and to the location of targets in close-range photogrammetric applications. Various moment-based techniques are considered as pattern classifiers, including geometric invariant moments, Legendre moments, Zernike moments and pseudo-Zernike moments; two-dimensional Fourier transforms are also considered as pattern classifiers, and the suitability of these techniques is assessed. These are then applied as object locators and as feature extractors or interest operators. Additionally, the use of the fractal dimension to segment natural scenes for regional classification, in order to limit the search space for particular objects, is considered. The pattern recognition techniques require considerable preprocessing of images; the various image processing techniques required are explained where needed. Extracted feature points are matched using relaxation-based techniques in conjunction with area-based methods to obtain subpixel accuracy. A subpixel pattern-recognition-based method is also proposed, and an investigation into improved area-based subpixel matching methods is undertaken. An algorithm for determining relative orientation parameters incorporating the epipolar line constraint is investigated and compared with a standard relative orientation algorithm. In conclusion, a basic system that can be automated, based on some novel techniques in conjunction with existing methods, is described and implemented in a mapping application. This system could be largely automated with suitably powerful computers.
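    As an illustrative example of one of the moment-based classifiers mentioned above, the sketch below computes Hu's seven geometric invariant moments, which are unchanged under translation, scale and rotation of a binary pattern. OpenCV is used purely for convenience, and the toy pattern is an arbitrary example; the Legendre, Zernike and pseudo-Zernike moments considered in the thesis are not shown.

```python
# Geometric invariant (Hu) moments as a simple rotation/scale/translation-
# invariant pattern descriptor.
import cv2
import numpy as np

def hu_descriptor(pattern):
    """Return a log-scaled Hu-moment feature vector for a binary image."""
    m = cv2.moments(pattern.astype(np.float32))
    hu = cv2.HuMoments(m).flatten()
    # log scaling compresses the large dynamic range of the raw moments
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# toy target: a filled rectangle, and the same rectangle rotated by 90 degrees
a = np.zeros((64, 64), np.uint8); a[20:44, 10:54] = 1
b = np.rot90(a).copy()
print(np.allclose(hu_descriptor(a), hu_descriptor(b), atol=1e-3))  # descriptors match
```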