
    A Parametrization-Based Surface Reconstruction System for Triangular Mesh Simplification with Application to Large Scale Scenes

    Laser scanners are now widely used to capture the geometry of art, animation maquettes, and large architectural, industrial, and land-form models, each scale posing its own processing problems. This thesis provides a solution for the simplification of triangulated data and for the surface reconstruction of large data sets in which feature edges provide an obvious segmentation structure. It also explores a new method for model segmentation, with the goal of applying multiresolution techniques to data sets characterized by curved areas and a lack of clear demarcation features. A preliminary surface-segmentation stage, which takes single or multiple scan data files as input, generates surface patches that are processed independently. The surface components are mapped onto a two-dimensional domain with boundary constraints, using a novel parametrization weight coefficient. This stage generates valid parameter-domain points, which can be fed as arguments to parametric modeling functions or surface approximation schemes. On this domain, our approach explores two types of remeshing. First, we generate points in a regular grid pattern, achieving multiresolution through a flexible grid step that is nevertheless designed to produce a globally uniform resampling aspect. In this case, for reconstruction, we address the open problem of border reconciliation across adjacent domains by retriangulating the border gap between the grid and the fixed irregular border. Alternatively, we straighten the domain borders in the parameter domain and coarsely triangulate the resulting simplified polygons, resampling the base domain triangles in a 1-4 subdivision pattern and achieving multiresolution through the number of subdivision steps. For mesh reconstruction, we use a linear interpolation method based on the original mesh triangles as control points on local planes, using a saved triangle correspondence between the original mesh and the parametric domain.
    We also use a region-wide approximation method, applied to the parameter grid points, which first generates data-trained control points and then uses them to obtain the reconstruction values at the resampled locations. In the grid resampling scheme, thanks to the border constraints, the reassembly of the segmented, sequentially processed data sets is seamless. In the subdivision scheme, we align adjacent border fragments in the parameter space and use a region-to-fragment map to achieve the same border reconstruction across two neighboring components. We successfully process data sets of up to 1,000,000 points in one pass of our program, without intermediate storage, and can assemble larger scenes from sequential runs. For very large input files, we reduce the scene size by fragmenting the input with a nested application of our segmentation algorithm, and our pipeline reassembles the reconstruction output from the multiple data files into a single view.
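The 1-4 subdivision pattern described above can be sketched in a few lines. This is an illustrative reimplementation of the generic midpoint-subdivision pattern, not the thesis' actual code; the vertex and triangle data structures are assumptions:

```python
def subdivide_1_4(vertices, triangles):
    """One 1-4 subdivision step: insert the midpoint of every edge
    and replace each triangle with four smaller ones."""
    vertices = list(vertices)
    midpoint_cache = {}  # edge (i, j) with i < j -> index of new midpoint vertex

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            ax, ay, az = vertices[i]
            bx, by, bz = vertices[j]
            vertices.append(((ax + bx) / 2, (ay + by) / 2, (az + bz) / 2))
            midpoint_cache[key] = len(vertices) - 1
        return midpoint_cache[key]

    new_triangles = []
    for a, b, c in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        # Three corner triangles plus the central one.
        new_triangles += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return vertices, new_triangles

# Each step quadruples the triangle count, so k steps yield 4**k triangles
# per base triangle; this step count is the multiresolution knob.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
v2, t2 = subdivide_1_4(verts, [(0, 1, 2)])
print(len(t2))  # 4 triangles after one step
```

The midpoint cache ensures that neighboring triangles share the new edge vertices, keeping the refined mesh watertight.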

    Image Processing and Pattern Recognition Applied to Soil Structure

    This thesis represents collaborative research between the Department of Electronics & Electrical Engineering and the Department of Civil Engineering, University of Glasgow. The project was aimed at developing theories and techniques of image processing and pattern recognition for the study of soil microstructures. More specifically, the aim was to study the shapes, orientations, and arrangements of soil particles and voids (i.e. pores): these three properties are used both for the description, recognition, and classification of soils, and for studying the relationships between soil structures and physical, chemical, geological, geographical, and environmental changes. The work presented here was based principally on the need to analyse the structure of soil as recorded in two-dimensional images, which might be conventional photographs, optical micrographs, or electron micrographs. The thesis first gives a brief review of image processing and pattern recognition and their previous application to the study of soil microstructures. It then presents a convex-hull-based shape description and classification for soil particles. A new algorithm, SPCH, is proposed for finding the convex hull of either a binary object or a cluster of points in a plane; the algorithm is efficient and reliable. Features of pattern vectors for shape description and classification are obtained from the convex hull and the object. These features are invariant with respect to coordinate rotation, translation, and scaling. The objects can then be classified by any standard feature-space method; here, minimum-distance classification was used. Next, the orientation analysis of soil particles is described. A new method, Directed Vein, is proposed for this analysis, and three other methods, Convex Hull, Principal Components, and Moments, are also presented.
    A comparison of the four methods shows that the Directed Vein method is the fastest; it also has the special property of estimating an 'internal preferred orientation', whereas the other methods estimate an 'elongation direction'. Fourth, the roundness/sharpness analysis of soil particles is presented. Three new algorithms, referred to as the Centre, Gradient Centre, and Radius methods, all based on the Circular Hough Transform, are proposed, and two traditional Circular Hough Transform algorithms are presented as well. The three new methods were successfully applied to measuring the roundness (sharpness of corners) of two-dimensional particles. The five methods were compared in terms of memory requirement, speed, and accuracy; the Radius method appears to be the best for the specific topic of sharpness/roundness analysis. Finally, the analysis and classification of aggregates of objects is introduced. A new method, the Extended Linear Hough Transform, is proposed: the orientations and locations of the objects are mapped into an extended Hough space, and the arrangements of the objects within an aggregate are then determined by analysing the data distributions in this space. The aggregates can then be classified using a tree classifier. Taken together, the methods developed or tested here provide a useful toolkit for analysing the shapes, orientations, and aggregation of particles such as those seen in two-dimensional images of soil structure at various scales.
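As an illustration of the kind of convex-hull-derived, similarity-invariant shape features described above (this is not the thesis' SPCH algorithm; a standard monotone-chain hull stands in for it, and the two ratio features are generic examples):

```python
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def polygon_area(poly):
    """Shoelace formula."""
    return abs(sum(poly[i][0]*poly[(i+1) % len(poly)][1]
                   - poly[(i+1) % len(poly)][0]*poly[i][1]
                   for i in range(len(poly)))) / 2

def perimeter(poly):
    return sum(math.dist(poly[i], poly[(i+1) % len(poly)])
               for i in range(len(poly)))

def shape_features(particle_area, particle_perimeter, outline_points):
    """Ratio features: because both numerator and denominator scale,
    rotate, and translate together, the ratios are invariant."""
    hull = convex_hull(outline_points)
    return {
        "solidity":  particle_area / polygon_area(hull),
        "convexity": perimeter(hull) / particle_perimeter,
    }
```

A convex particle scores near 1.0 on both features, while a ragged or concave one scores lower; such a feature vector can then feed a minimum-distance classifier as in the thesis.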

    Pattern recognition methods applied to medical imaging: lung nodule detection in computed tomography images

    Lung cancer is one of the main public health issues in developed countries. The overall 5-year survival rate is only 10-16%, although the mortality rate among men in the United States has decreased by about 1.5% per year since 1991, and a similar trend for the male population has been observed in most European countries. By contrast, the survival rate for the female population is still decreasing, despite the decline in the mortality of young women observed over the last decade. Approximately 70% of lung cancers are diagnosed at stages too advanced for treatment to be effective. The five-year survival rate for early-stage lung cancers (stage I), which can reach 70%, is considerably higher than for cancers diagnosed at more advanced stages. Lung cancer most commonly manifests itself as non-calcified pulmonary nodules. CT has been shown to be the most sensitive imaging modality for the detection of small pulmonary nodules, particularly since the introduction of multi-detector-row and helical CT technologies. Screening programs based on Low Dose Computed Tomography (LDCT) may be regarded as a promising technique for detecting small, early-stage lung cancers. The efficacy of CT-based screening programs in reducing lung cancer mortality has not yet been fully demonstrated, and many experts hold differing and opposing opinions on this topic. However, recent results from the National Lung Screening Trial (NLST), involving 53,454 high-risk patients, show a 20% reduction in mortality when the screening program was carried out with helical CT rather than with a conventional chest X-ray. LDCT settings are currently recommended by the screening trial protocols. However, identifying small pulmonary nodules is not trivial in this case, due to the noisier appearance of low-dose CT images with respect to standard-dose CT.
    Moreover, thin slices are generally used in screening programs, producing datasets of about 300-400 slices per study. Depending on the screening trial protocol they joined, radiologists can be asked to identify even very small lung nodules, which is a difficult and time-consuming task. Lung nodules are rather spherical objects characterized by very low CT values and/or low contrast. Nodules may have CT values in the same range as those of blood vessels, airway walls, or the pleura, and may be strongly connected to them. It has been demonstrated that a large percentage of nodules (20-35%) is actually missed in screening diagnoses. To support radiologists in the identification of early-stage pathological objects, researchers started developing CAD methods for CT examinations about a decade ago. Within this framework, two CAD sub-systems are proposed: CAD for internal nodules (CADI), devoted to the identification of small nodules embedded in the lung parenchyma, i.e. Internal Nodules (INs), and CADJP, devoted to the identification of nodules originating on the pleura surface, i.e. Juxta-Pleural Nodules (JPNs). As the training and validation sets may drastically influence the performance of a CAD system, the presented approaches have been trained, developed, and tested on different datasets of CT scans (Lung Image Database Consortium (LIDC), ITALUNG-CT) and finally blindly validated on the ANODE09 dataset. The two CAD sub-systems are implemented in the ITK framework, an open-source C++ framework for the segmentation and registration of medical images, and the rendering of the obtained results is achieved using VTK, a freely available software system for 3D computer graphics, image processing, and visualization. The Support Vector Machines (SVMs) are implemented in SVMLight.
    The two proposed approaches have been developed to detect solid nodules, since the number of Ground Glass Opacity (GGO) nodules contained in the available datasets was considered too low. This thesis is structured as follows: the first chapter explains the basic concepts of CT and lung anatomy. The second chapter deals with CAD systems and their evaluation methods. The third chapter describes the datasets used for this work. Chapter 4 explains the lung segmentation algorithm in detail, and chapters 5 and 6 discuss the algorithms for detecting internal and juxta-pleural candidates. Chapter 7 explains the reduction of false-positive findings, and chapter 8 presents the results of the training and validation sessions. Finally, the last chapter draws the conclusions.
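A minimal sketch of the feature-based candidate filtering idea underlying such CAD systems, exploiting the near-spherical shape of nodules to separate them from elongated vessels. The feature definitions and thresholds below are illustrative assumptions, not the thesis' actual CADI/CADJP parameters:

```python
import math

def candidate_features(voxels, spacing=(1.0, 1.0, 1.0)):
    """Summarize a candidate, given as a list of (z, y, x) voxel
    coordinates, by its volume and the relative spread of voxel
    distances from the centroid. A roughly spherical nodule has a
    compact, low-spread radius distribution; a vessel segment has a
    wide one."""
    n = len(voxels)
    volume = n * spacing[0] * spacing[1] * spacing[2]
    cz = sum(v[0] for v in voxels) / n
    cy = sum(v[1] for v in voxels) / n
    cx = sum(v[2] for v in voxels) / n
    radii = [math.sqrt((z - cz)**2 + (y - cy)**2 + (x - cx)**2)
             for z, y, x in voxels]
    mean_r = sum(radii) / n
    spread = (sum((r - mean_r) ** 2 for r in radii) / n) ** 0.5
    return volume, spread / (mean_r + 1e-9)   # lower ratio = more sphere-like

def keep_candidate(volume, radial_spread, max_spread=0.35,
                   min_volume=4.0, max_volume=5000.0):
    """Reject candidates that are implausibly small, large, or elongated."""
    return min_volume <= volume <= max_volume and radial_spread <= max_spread
```

In a real pipeline the surviving candidates would then be passed, with richer feature vectors, to a trained classifier such as an SVM for final false-positive reduction.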

    Connected Attribute Filtering Based on Contour Smoothness


    Acta Cybernetica: Volume 20, Number 1.


    Fine Art Pattern Extraction and Recognition

    This is a reprint of articles from the Special Issue published online in the open access journal Journal of Imaging (ISSN 2313-433X) (available at: https://www.mdpi.com/journal/jimaging/special issues/faper2020).

    Sistemas automáticos de informação e segurança para apoio na condução de veículos

    Doctoral programme in Mechanical Engineering. The main object of this thesis is the study of algorithms for automatic information processing and representation, in particular of information provided by onboard vehicle sensors (2D and 3D), to be used in the context of driving assistance. The work focuses on some of the problems facing today's Autonomous Driving (AD) systems and Advanced Driver Assistance Systems (ADAS). The document is composed of two parts. The first part describes the design, construction, and development of three robotic prototypes, including remarks about onboard sensors, algorithms, and software architectures. These robots were used as test beds for testing and validating the developed techniques; additionally, they have participated in several autonomous driving competitions with very good results. The second part of the document presents several algorithms for generating intermediate representations of the raw sensor data. These can be used to enhance existing pattern recognition, detection, or navigation techniques, and may thus benefit future AD or ADAS applications.
    Since vehicles often contain a large number of sensors of different natures, intermediate representations are particularly advantageous: they can be used to tackle problems related to the diverse nature of the data (2D, 3D, photometric, etc.), the asynchrony of the data (multiple sensors streaming data at different frequencies), or the alignment of the data (calibration issues, different sensors providing different measurements of the same object). Within this scope, novel techniques are proposed for computing a multi-camera, multi-modal inverse perspective mapping representation, performing color correction between images to obtain quality mosaics, and producing a scene representation based on polygonal primitives that can cope with very large amounts of 3D and 2D data, including the ability to refine the representation as new information is continuously received.
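For a calibrated camera observing a planar road, the inverse perspective mapping mentioned above reduces to applying a plane-to-plane homography; a minimal sketch, where the homography matrix is a made-up example standing in for real calibration data:

```python
def ipm_point(H, u, v):
    """Map image pixel (u, v) to ground-plane coordinates (x, y)
    through the 3x3 homography H, with perspective division."""
    x = H[0][0]*u + H[0][1]*v + H[0][2]
    y = H[1][0]*u + H[1][1]*v + H[1][2]
    w = H[2][0]*u + H[2][1]*v + H[2][2]
    return x / w, y / w

# Illustrative homography (pure scale plus translation, so it is easy
# to check by hand): 0.25 ground units per pixel, origin at pixel
# (320, 240). A real H would come from camera calibration.
H = [[0.25, 0.0, -80.0],
     [0.0, 0.25, -60.0],
     [0.0,  0.0,   1.0]]

print(ipm_point(H, 320, 240))  # -> (0.0, 0.0): image centre hits the origin
```

Applying the same mapping to every road-plane pixel of each camera places all views in one common bird's-eye frame, which is what makes multi-camera fusion of the kind described above possible.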