
    Multiple 2D self organising map network for surface reconstruction of 3D unstructured data

    Surface reconstruction is a challenging task in reverse engineering because the reconstructed surface must closely match the original object using only the data obtained. These data are mostly unstructured and carry too little information, so an incorrect surface can result; the data should therefore be reorganised by finding the correct topology with minimum surface error. Previous studies showed that the Self Organising Map (SOM) model, the conventional surface approximation approach with Non-Uniform Rational B-Splines (NURBS) surfaces, and optimisation methods such as the Genetic Algorithm (GA), Differential Evolution (DE) and Particle Swarm Optimisation (PSO) are widely applied to surface reconstruction. However, these models, approaches and optimisation methods still suffer from unstructured-data and accuracy problems. The aims of this research are therefore to propose a Cube SOM (CSOM) model with multiple 2D SOM networks for organising unstructured surface data, and an optimised surface approximation approach for generating NURBS surfaces. The GA, DE and PSO methods are implemented to minimise the surface error by adjusting the NURBS control points. To test and validate the proposed model and approach, four primitive object datasets and one medical image dataset are used, and three performance measurements are applied: Average Quantisation Error (AQE) and Number Of Vertices (NOV) for the CSOM model, and surface error for the optimised surface approximation approach. The AQE of the CSOM model is improved by 64% and 66% compared to 2D and 3D SOM respectively, and its NOV is reduced from 8000 to 2168 compared to 3D SOM. The surface error of the optimised surface approximation approach is improved by 7% compared to the conventional approach. The proposed CSOM model and optimised surface approximation approach successfully reconstructed the surfaces of all five datasets with better performance on all three measurements.
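
    The optimisation step above is, at its core, a metaheuristic search over control-point coordinates. Below is a minimal PSO sketch of that idea in Python/NumPy; the 4x4 control grid, the synthetic data and the simplified surface-error objective (which skips NURBS evaluation entirely) are illustrative assumptions, not the paper's CSOM/NURBS pipeline.

```python
# Minimal PSO sketch: particles encode candidate control-point positions
# and are moved to minimise a surface-error objective. The objective here
# is a stand-in (no NURBS evaluation), purely to show the PSO mechanics.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical target: a 4x4 grid of 3D control points.
target = rng.normal(size=(4, 4, 3))
data_points = target + 0.01 * rng.normal(size=target.shape)  # noisy samples

def surface_error(ctrl_flat):
    # Stand-in objective: mean squared distance between the surface implied
    # by the control points and the scanned data points.
    return np.mean((ctrl_flat.reshape(4, 4, 3) - data_points) ** 2)

n_particles, dim = 30, target.size
pos = rng.normal(size=(n_particles, dim))        # particle positions
vel = np.zeros_like(pos)                         # particle velocities
pbest = pos.copy()                               # personal bests
pbest_err = np.array([surface_error(p) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()       # global best

w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    err = np.array([surface_error(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[np.argmin(pbest_err)].copy()

print("best surface error:", pbest_err.min())
```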

    GPUMLib: Deep Learning SOM Library for Surface Reconstruction

    The evolution of 3D scanning devices and innovations in computer processing power and storage capacity have sparked a revolution in producing big point-cloud datasets. This phenomenon has become an integral part of the sophisticated building design process, especially in the era of the 4th Industrial Revolution. Big point-cloud datasets make surface reconstruction and visualization complex, since existing algorithms do not handle them readily. In this context, intelligent surface reconstruction algorithms need to be revolutionized to deal with big point-cloud datasets, in tandem with the advancement of hardware processing power and storage capacity. In this study, we propose GPUMLib, a deep learning library for the self-organizing map (SOM-DLLib), to solve problems involving big point-cloud datasets from 3D scanning devices. SOM-DLLib consists of multiple layers for reducing and optimizing those big point-cloud datasets. The findings show that the final objects are successfully reconstructed with an optimized neighborhood representation, and that performance improves as the size of the point cloud increases.
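
    As a rough illustration of the SOM-based reduction such a library performs, the sketch below fits a small 2D map to a synthetic point cloud so that the map nodes become a compact surrogate of the data. All names, sizes and decay schedules are assumptions for illustration; this is not the GPUMLib/SOM-DLLib API.

```python
# Online SOM training: each sample pulls the best-matching unit (BMU) and
# its grid neighbours toward it, with a neighbourhood that shrinks over time.
import numpy as np

rng = np.random.default_rng(1)
cloud = rng.random((100_000, 3))                 # stand-in "big" point cloud

rows, cols = 20, 20                              # map resolution
codebook = rng.random((rows * cols, 3))          # one 3D vector per node
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                            indexing="ij"), -1).reshape(-1, 2)

sigma0, lr0, n_iter = max(rows, cols) / 2, 0.5, 20_000
for t in range(n_iter):
    x = cloud[rng.integers(len(cloud))]
    bmu = np.argmin(((codebook - x) ** 2).sum(1))   # best-matching unit
    frac = t / n_iter
    sigma = sigma0 * (0.01 / sigma0) ** frac        # shrinking neighbourhood
    lr = lr0 * (0.01 / lr0) ** frac                 # decaying learning rate
    d2 = ((grid - grid[bmu]) ** 2).sum(1)           # grid distance to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))              # neighbourhood weights
    codebook += lr * h[:, None] * (x - codebook)    # pull nodes toward x

print("reduced", len(cloud), "points to", len(codebook), "map nodes")
```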

    3D model reconstruction using neural gas accelerated on GPU

    In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. This is a simple and accurate method for reconstructing objects from point clouds obtained from multiple overlapping views with low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal, we tested the method on several models, studied the parameterization of the neural network by computing the quality of representation, and compared the results with other neural methods such as growing neural gas and Kohonen maps, as well as classical methods such as Voxel Grid. We also reconstructed models acquired by low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we propose its acceleration: we have redesigned and implemented the NG learning algorithm to fit Graphics Processing Units using CUDA, obtaining a speed-up of 180× over the sequential CPU version. This work was partially funded by Spanish Government grant DPI2013-40534-R.
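
    The NG adaptation the authors accelerate is a rank-based update: every unit is ranked by its distance to the current input and pulled toward it with a weight that decays with rank. The sequential sketch below shows that core rule; the parameters and data are illustrative, and the CHL edge-creation step is omitted, so this is not the authors' CUDA implementation.

```python
# Neural gas core loop: rank all units by distance to the sample, then move
# each unit toward the sample with a weight exp(-rank/lambda).
import numpy as np

rng = np.random.default_rng(2)
cloud = rng.random((50_000, 3))                  # stand-in scanned points

n_units, n_iter = 500, 30_000
units = rng.random((n_units, 3))

lam0, lamf, eps0, epsf = n_units / 2, 0.01, 0.5, 0.005
for t in range(n_iter):
    x = cloud[rng.integers(len(cloud))]
    frac = t / n_iter
    lam = lam0 * (lamf / lam0) ** frac           # neighbourhood range decay
    eps = eps0 * (epsf / eps0) ** frac           # step-size decay
    d = np.linalg.norm(units - x, axis=1)
    rank = np.argsort(np.argsort(d))             # 0 = closest unit
    h = np.exp(-rank / lam)                      # rank-based weighting
    units += eps * h[:, None] * (x - units)      # NG adaptation step

print("fitted", n_units, "NG units to the point cloud")
```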

    Parameterization of point-cloud freeform surfaces using adaptive sequential learning RBF networks

    We propose a self-organizing Radial Basis Function (RBF) neural network method for parameterizing freeform surfaces from large, noisy and unoriented point clouds. In particular, an adaptive sequential learning algorithm is presented for constructing the network from a single instance of the point set. The adaptive learning allows neurons to be dynamically inserted and fully adjusted (e.g. their locations, widths and weights) according to the mapping residuals and the novelty of data points with respect to the underlying geometry. Pseudo-neurons exhibiting very limited contributions can be removed through a pruning procedure. Additionally, a neighborhood extended Kalman filter (NEKF) was developed to significantly accelerate parameterization. Experimental results show that this adaptive learning effectively captures global low-frequency variations while preserving sharp local details, ultimately leading to an accurate and compact parameterization, as characterized by a small number of neurons. Parameterization using the proposed RBF network provides simple, low-cost and low-storage solutions to many problems, such as surface construction, re-sampling, hole filling, multiple level-of-detail meshing and data compression from unstructured and incomplete range data. Performance results are also presented for comparison.
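
    The insert-or-adjust rule described above can be sketched in the spirit of resource-allocating RBF networks: a neuron is inserted when a sample is both novel (far from all centers) and poorly predicted; otherwise an existing neuron is adjusted. The thresholds, the synthetic height field, and the plain gradient step standing in for the paper's NEKF update are all assumptions, and the pruning of pseudo-neurons is omitted for brevity.

```python
# Sequential growth of an RBF network: insert a neuron on novel, badly
# predicted samples; otherwise nudge the weight of the nearest neuron.
import numpy as np

rng = np.random.default_rng(3)
centers, widths, weights = [], [], []            # the growing RBF network

def predict(x):
    if not centers:
        return 0.0
    c, s, w = np.array(centers), np.array(widths), np.array(weights)
    phi = np.exp(-((x - c) ** 2).sum(1) / (2 * s ** 2))  # Gaussian RBFs
    return float(phi @ w)

eps_novel, eps_err, lr = 0.1, 0.05, 0.2          # assumed thresholds / rate
for _ in range(5_000):
    x = rng.random(2)                            # (u, v) parameter sample
    y = np.sin(3 * x[0]) * np.cos(3 * x[1])      # stand-in surface height
    err = y - predict(x)
    d_near = (float(np.linalg.norm(np.array(centers) - x, axis=1).min())
              if centers else np.inf)
    if abs(err) > eps_err and d_near > eps_novel:
        centers.append(x.copy())                 # insert a new neuron
        widths.append(0.2 if d_near == np.inf else max(0.5 * d_near, 0.05))
        weights.append(err)                      # weight that cancels err
    elif centers:
        i = int(np.argmin(np.linalg.norm(np.array(centers) - x, axis=1)))
        weights[i] += lr * err                   # adjust nearest neuron

print("network grew to", len(centers), "neurons")
```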

    Growing Neural Gas with Different Topologies for 3D Space Perception

    Three-dimensional space perception is one of the most important capabilities for an autonomous mobile robot operating in an unknown environment, since the robot needs to detect the target object and estimate its 3D pose to perform given tasks efficiently. After a 3D point cloud is measured by an RGB-D camera, the robot needs to reconstruct a structure from the point cloud with color information according to the given tasks, because the point cloud is unstructured data. For reconstructing the unstructured point cloud, growing neural gas (GNG) based methods have been utilized in many studies, since GNG can learn the data distribution of the point cloud appropriately. However, conventional GNG-based methods have unsolved problems regarding scalability and multi-viewpoint clustering. In this paper, we therefore propose growing neural gas with different topologies (GNG-DT) as a new topological structure learning method for solving these problems. GNG-DT maintains multiple topologies, one per property, whereas the conventional GNG method has a single topology over the whole input vector. In addition, the distance measurement in the winner node selection uses only the position information, in order to preserve the environmental structure of the point cloud. We show several experimental results for the proposed method on simulation and RGB-D datasets measured by Kinect, verifying that it outperforms the other methods in most cases with respect to quantization and clustering errors. Finally, we summarize the proposed method and discuss future directions of this research.
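
    The position-only winner selection is the distinctive step: nodes carry both position and colour, but the best-matching units are chosen by position alone, so colour differences never distort the learned spatial structure. The sketch below shows a single adaptation step under that rule; edge ageing, node insertion and the per-property topologies of the full GNG-DT algorithm are omitted, and all parameters are illustrative.

```python
# One GNG-style adaptation step with position-only winner selection.
# In full GNG the winner's topological neighbours are adapted; here the
# runner-up stands in for them to keep the sketch short.
import numpy as np

rng = np.random.default_rng(4)
nodes_pos = rng.random((100, 3))                 # node positions
nodes_rgb = rng.random((100, 3))                 # node colours

def adapt(p_xyz, p_rgb, eps_b=0.05, eps_n=0.005):
    d = np.linalg.norm(nodes_pos - p_xyz, axis=1)   # position-only distance
    s1, s2 = np.argsort(d)[:2]                      # two best-matching units
    for i, eps in ((s1, eps_b), (s2, eps_n)):
        nodes_pos[i] += eps * (p_xyz - nodes_pos[i])
        nodes_rgb[i] += eps * (p_rgb - nodes_rgb[i])
    return s1, s2                                   # pair to connect via CHL

# One coloured RGB-D point drawn as an example input.
winner, runner_up = adapt(rng.random(3), rng.random(3))
print("winner:", winner, "runner-up:", runner_up)
```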

    What's the Situation with Intelligent Mesh Generation: A Survey and Perspectives

    Intelligent Mesh Generation (IMG) represents a novel and promising field of research, utilizing machine learning techniques to generate meshes. Despite its relative infancy, IMG has significantly broadened the adaptability and practicality of mesh generation techniques, delivering numerous breakthroughs and unveiling potential future pathways. However, a noticeable void exists in the contemporary literature concerning comprehensive surveys of IMG methods. This paper endeavors to fill this gap by providing a systematic and thorough survey of the current IMG landscape. With a focus on 113 preliminary IMG methods, we undertake a meticulous analysis from various angles, encompassing core algorithm techniques and their application scope, agent learning objectives, data types, targeted challenges, as well as advantages and limitations. We have curated and categorized the literature, proposing three unique taxonomies based on key techniques, output mesh unit elements, and relevant input data types. This paper also underscores several promising future research directions and challenges in IMG. To augment reader accessibility, a dedicated IMG project page is available at https://github.com/xzb030/IMG_Survey

    Segmentation of 3D Photogrammetric Point Cloud for 3D Building Modeling

    3D city modeling has become important over the last decades, as these models are used in many studies, including energy evaluation, visibility analysis, 3D cadastre, urban planning, change detection and disaster management. Segmentation and classification of photogrammetric or LiDAR data are important for 3D city models, as these are the main data sources, and both tasks are challenging due to their complexity. This study presents research in progress that focuses on the segmentation and classification of 3D point clouds and orthoimages to generate 3D urban models. The aim is to classify photogrammetry-based point clouds (> 30 pts/sqm) in combination with aerial RGB orthoimages (~10 cm resolution) in order to label buildings, ground-level objects (GLOs), trees, grass areas and other regions. While classifying the aerial orthoimages is expected to be a fast way to obtain classes that can then be transferred from image space to the point cloud, segmenting the point cloud is expected to be much more time-consuming but to provide significant segments of the analyzed scene. For this reason, the proposed method combines segmentation methods on the two types of geoinformation in order to achieve better results.
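
    The image-to-cloud transfer mentioned above can be illustrated with a simple raster lookup: each point's planimetric coordinates are mapped to the pixel of a classified orthoimage, and that pixel's class is copied to the point. The grid origin, the 10 cm pixel size and the random class raster below are hypothetical values, not the study's data.

```python
# Transfer per-pixel classes from a georeferenced orthoimage to 3D points
# by indexing the raster with each point's (easting, northing) coordinates.
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical 10 cm classification raster: 0=ground, 1=building, 2=tree.
class_raster = rng.integers(0, 3, size=(1000, 1000))
origin_x, origin_y, pix = 500_000.0, 4_500_000.0, 0.10  # top-left, metres

points = np.column_stack([
    origin_x + 100 * rng.random(10_000),         # easting
    origin_y - 100 * rng.random(10_000),         # northing (rows grow down)
    30 * rng.random(10_000),                     # height
])

cols = ((points[:, 0] - origin_x) / pix).astype(int)
rows = ((origin_y - points[:, 1]) / pix).astype(int)
valid = ((rows >= 0) & (rows < class_raster.shape[0])
         & (cols >= 0) & (cols < class_raster.shape[1]))

labels = np.full(len(points), -1)                # -1 = outside the image
labels[valid] = class_raster[rows[valid], cols[valid]]
print("labelled", int(valid.sum()), "of", len(points), "points")
```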
