6,977 research outputs found

    Feature Sensitive Three-Dimensional Point Cloud Simplification using Support Vector Regression

    Contemporary three-dimensional (3D) scanning devices are characterized by high speed and resolution. They provide dense point clouds that contain abundant data about scanned objects and require computationally intensive, time-consuming processing. On the other hand, point clouds usually contain a large amount of redundant data that carry little or no additional information about the scanned object's geometry. To facilitate further analysis and extraction of relevant information from the point cloud, as well as faster data transfer between computational devices, it is rational to simplify it at an early stage of processing. However, the data reduction performed during simplification has to preserve a high level of information content; simplification has to be feature sensitive. In this paper we propose a method for feature-sensitive simplification of 3D point clouds based on epsilon-insensitive support vector regression (epsilon-SVR). The proposed method is intended for structured point clouds. It exploits the flatness property of epsilon-SVR to effectively recognize points in high-curvature areas of scanned lines. Points from these areas are kept in the simplified point cloud along with a reduced number of points from flat areas. In addition, the proposed method effectively detects points in the vicinity of sharp edges without additional processing. The proposed simplification method is experimentally verified on three real-world case studies. To estimate the quality of the simplification, we fit non-uniform rational B-splines to the initial and reduced scan lines.
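
    As a rough sketch of the core idea (not the authors' exact algorithm; scikit-learn, the RBF kernel and the keep/drop parameters below are assumptions), an epsilon-SVR can be fit to each structured scan line and points that leave the epsilon tube treated as high-curvature feature points:

        # Sketch only: flag points whose epsilon-SVR residual leaves the epsilon tube
        # as likely high-curvature (feature) points of one ordered scan line.
        import numpy as np
        from sklearn.svm import SVR

        def simplify_scan_line(points, epsilon=0.5, flat_stride=5):
            """points: (N, 3) array holding one structured scan line, ordered along the line."""
            # Parameterize the line by cumulative chord length.
            t = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
            t = t.reshape(-1, 1)

            keep = np.zeros(len(points), dtype=bool)
            for dim in range(3):  # regress each coordinate against the line parameter
                svr = SVR(kernel="rbf", C=100.0, epsilon=epsilon)
                svr.fit(t, points[:, dim])
                residual = np.abs(svr.predict(t) - points[:, dim])
                # Points outside the epsilon tube are treated as feature points.
                keep |= residual > epsilon

            # Retain a thinned subset of the flat regions so the overall shape survives.
            keep[::flat_stride] = True
            return points[keep]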

    Experiments on Surface Reconstruction for Partially Submerged Marine Structures

    Over the past 10 years, significant scientific effort has been dedicated to the problem of three-dimensional (3-D) surface reconstruction for structural systems. However, the critical area of marine structures remains insufficiently studied. The research presented here focuses on the problem of 3-D surface reconstruction in the marine environment. This paper summarizes our hardware, software, and experimental contributions to surface reconstruction over the past few years (2008–2011). We propose the use of off-the-shelf sensors and a robotic platform to scan marine structures both above and below the waterline, and we develop a method and software system that uses the Ball Pivoting Algorithm (BPA) and the Poisson reconstruction algorithm to reconstruct 3-D surface models of marine structures from the scanned data. We have tested our hardware and software systems extensively in Singapore waters, including operation in rough waters with currents of around 1–2 m/s. We present results on the construction of various 3-D models of marine structures, including slowly moving structures such as floating platforms, moving boats, and stationary jetties. Furthermore, the proposed surface reconstruction algorithm makes no use of any navigation sensor such as GPS, a Doppler velocity log, or an inertial navigation system. (Singapore-MIT Alliance for Research and Technology, Center for Environmental Sensing and Modeling)
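
    For readers who want to experiment, both reconstruction algorithms named above are available in the Open3D library; the sketch below is only illustrative of those algorithms, and the file name, normal-estimation radius and ball radii are assumptions rather than values from the paper:

        # Illustrative use of Ball Pivoting and Poisson reconstruction in Open3D.
        import open3d as o3d

        pcd = o3d.io.read_point_cloud("marine_structure_scan.ply")  # hypothetical input file
        pcd.estimate_normals(
            search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

        # Ball Pivoting: roll balls of a few radii over the points to stitch triangles.
        radii = o3d.utility.DoubleVector([0.05, 0.1, 0.2])
        bpa_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)

        # Poisson: fit an indicator function whose iso-surface follows the oriented points.
        poisson_mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=9)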

    An Integrated Procedure to Assess the Stability of Coastal Rocky Cliffs: From UAV Close-Range Photogrammetry to Geomechanical Finite Element Modeling

    The present paper explores the combination of unmanned aerial vehicle (UAV) photogrammetry and three-dimensional geomechanical modeling in the investigation of instability processes along long sectors of coastal rocky cliffs. The need for a reliable and detailed reconstruction of the geometry of the cliff surfaces, besides the geomechanical characterization of the rock materials, can be a very challenging requirement for sub-vertical coastal cliffs overlooking the sea. Very often, no information can be acquired by alternative surveying methodologies, due to the absence of vantage points, and fieldwork can pose a risk for personnel. The case study is a 600 m long sea cliff located at Sant'Andrea (Melendugno, Apulia, Italy). The cliff is characterized by a very complex geometrical setting, with a striking alternation of 10 to 20 m high vertical walls and frequent caves, arches and rock stacks. Initially, the rocky cliff surface was reconstructed at very fine spatial resolution from a combination of nadir and oblique images acquired by unmanned aerial vehicles. Subsequently, a limited area was selected for further investigation. In particular, a data refinement/decimation procedure was assessed to find a three-dimensional model suitable for finite element geomechanical modeling without loss of information on the surface complexity. Finally, to test the integrated procedure, the potential modes of failure of this sector of the investigated cliff were derived. Results indicate that the most likely failure mechanism along the examined sea cliff is the possible propagation of shear fractures or tensile failures along concave or overhanging cliff portions produced by previous collapses or erosion of the underlying rock volumes. The proposed approach to the investigation of coastal cliff stability has proven to be a feasible and flexible tool for the rapid and highly automated investigation of slope failure hazards in coastal areas.
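
    The refinement/decimation step described above can be approximated, for example, with quadric edge-collapse decimation; the sketch below assumes Open3D, and the file names and triangle budget are illustrative choices rather than the paper's settings:

        # Illustrative decimation of a dense photogrammetric mesh before FE meshing.
        import open3d as o3d

        dense_mesh = o3d.io.read_triangle_mesh("cliff_sector_uav.ply")  # hypothetical input
        dense_mesh.compute_vertex_normals()

        # Quadric edge-collapse decimation keeps geometric error low for a given budget.
        fe_ready = dense_mesh.simplify_quadric_decimation(target_number_of_triangles=50000)
        fe_ready.remove_degenerate_triangles()
        fe_ready.remove_duplicated_vertices()
        o3d.io.write_triangle_mesh("cliff_sector_fe_input.ply", fe_ready)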

    The combination of geomatic approaches and operational modal analysis to improve calibration of finite element models: a case of study in Saint Torcato church (Guimarães, Portugal)

    This paper presents a set of procedures based on laser scanning, photogrammetry (Structure from Motion) and operational modal analysis in order to obtain accurate numerical models that allow identifying the architectural complications that arise in historical buildings. In addition, the method includes tools that facilitate building-damage monitoring tasks. All of these are aimed at obtaining a robust basis for numerical analysis of the actual behavior and for monitoring tasks. This case study seeks to validate these methodologies, using the case of Saint Torcato Church, located in Guimarães, Portugal.
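
    A minimal sketch of the underlying model-updating idea follows: uncertain material parameters of the finite element model are tuned so that its natural frequencies match those identified by operational modal analysis. The frequencies, the surrogate FE function and the parameter values below are purely illustrative assumptions, not data from the study:

        # Toy model-updating loop: fit (E, rho) so FE frequencies match measured ones.
        import numpy as np
        from scipy.optimize import minimize

        measured_hz = np.array([2.1, 2.9, 4.6])  # illustrative OMA frequencies, not real data

        def fe_frequencies(params):
            """Stand-in for the FE modal solver: first three frequencies assumed to
            scale with sqrt(E / rho); a real study would call the FE code here."""
            E, rho = params
            mode_factors = np.array([1.0, 1.4, 2.2])        # illustrative modal factors
            return mode_factors * np.sqrt(E / rho) / 600.0  # 600.0: arbitrary geometry factor

        def objective(params):
            predicted = fe_frequencies(params)
            # Sum of squared relative frequency errors over the paired modes.
            return float(np.sum(((predicted - measured_hz) / measured_hz) ** 2))

        x0 = np.array([3.0e9, 2000.0])  # initial guess: Young's modulus [Pa], density [kg/m^3]
        result = minimize(objective, x0, method="Nelder-Mead")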

    Consistent Density Scanning and Information Extraction From Point Clouds of Building Interiors

    Over the last decade, 3D range scanning systems have improved considerably, enabling designers to capture large and complex domains such as building interiors. The captured point cloud is processed to extract specific Building Information Models, where the main research challenge is to simultaneously handle huge and cohesive point clouds representing multiple objects, occluded features and vast geometric diversity. These domain characteristics increase the data complexity and thus make it difficult to extract accurate information models from the captured point clouds. The research work presented in this thesis improves the information extraction pipeline with the development of novel algorithms for consistent-density scanning and automated information extraction for building interiors. A restricted density-based scan planning methodology computes the number of scans needed to cover large linear domains while ensuring the desired data density and reducing rigorous post-processing of the data sets. The research work further develops effective algorithms to transform the captured data into information models in terms of domain features (layouts), meaningful data clusters (segmented data) and specific shape attributes (occluded boundaries) having better practical utility. Initially, a direct point-based simplification and layout extraction algorithm is presented that can handle cohesive point clouds through adaptive simplification and an accurate layout extraction approach without generating an intermediate model. Further, three information extraction algorithms are presented that transform point clouds into meaningful clusters. The novelty of these algorithms lies in the fact that they work directly on point clouds by exploiting their inherent characteristics. First, a rapid data clustering algorithm is presented to quickly identify objects in the scanned scene using a robust hue, saturation and value (HSV) color model for better scene understanding. A hierarchical clustering algorithm is then developed to handle the vast geometric diversity, ranging from planar walls to complex freeform objects. Shape-adaptive parameters help to segment planar as well as complex interiors, whereas combining color- and geometry-based segmentation criteria improves clustering reliability and identifies unique clusters in geometrically similar regions. Finally, a progressive scan-line-based, side-ratio-constraint algorithm is presented to identify occluded boundary data points by investigating their spatial discontinuity.
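
    As a rough illustration of the color-assisted clustering idea (not the thesis' exact algorithm), per-point RGB values can be converted to HSV and the hue combined with position before clustering; scikit-learn's DBSCAN and the weights below are assumptions made only for this sketch:

        # Sketch of color-assisted clustering: cluster on position plus hue so objects
        # with distinct colors separate even when they touch geometrically.
        import numpy as np
        from matplotlib.colors import rgb_to_hsv
        from sklearn.cluster import DBSCAN

        def color_assisted_clusters(xyz, rgb, eps=0.05, hue_weight=0.2):
            """xyz: (N, 3) coordinates in metres; rgb: (N, 3) colors in [0, 1]."""
            hue = rgb_to_hsv(rgb)[:, :1]  # hue is relatively stable under indoor lighting,
                                          # though this simple version ignores hue wrap-around
            features = np.hstack([xyz, hue_weight * hue])
            labels = DBSCAN(eps=eps, min_samples=20).fit_predict(features)
            return labels  # -1 marks points DBSCAN regards as noise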

    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that seeks to apply a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) in order to assist rescue personnel. A base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them, that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the entire system. The human interface has to ensure that the human supervisor and human interveners are provided with a reduced but relevant overview of the ground and of the robots and human rescue workers therein.

    Automatic tolerance inspection through Reverse Engineering: a segmentation technique for plastic injection moulded parts

    This work studies segmentation procedures for recognising features in a Reverse Engineering (RE) application oriented to computer-aided tolerance inspection of injection moulding die set-up, necessary to manufacture electromechanical components. It discusses all steps of the procedure, from the initial acquisition to the final measurement data management, but the specific original developments focus on the RE post-processing method, which should solve the problems related to the automation of surface recognition and hence of the inspection process. As explained in the first two chapters, automation of the inspection process depends chiefly on feature recognition after the segmentation process. This work presents a voxel-based approach with the aim of reducing the computational effort related to tessellation and curvature analysis, with or without filtering. A voxel structure approximates the shape through parallelepipeds that each include a small subset of points. In this sense, it acts as a filter, since the number of voxels is smaller than the total number of points, but also as a local approximation of the surface, if proper fitting models are applied. Through sensitivity analyses and industrial applications, the limits and prospects of the proposed algorithms are discussed and validated in terms of accuracy and time savings. Validation case studies are taken from real applications at ABB Sace S.p.A., which promoted this research. Plastic injection moulding of electromechanical components requires a time-consuming die set-up. This is due to the need for dies with many cavities, which during the cooling phase may present different stamping conditions and thus defects, including lengths outside their dimensional tolerance and geometrical errors. To increase industrial efficiency, automating the inspection requires not only the automatic recognition of features but also a computer-aided inspection protocol (path planning and inspection data management). For this reason, these steps are also addressed, as the natural framework of the thesis research activity. The work is structured in six chapters. Chapter 1 introduces the whole procedure, focusing on the reasons for and utility of applying RE techniques in industrial engineering. Chapter 2 analyses the acquisition issues and methods related to our application, describing: (a) the selected hardware; (b) the adopted strategy for point cloud acquisition. Chapter 3 describes the proposed RE post-processing together with a state of the art on data segmentation and surface reconstruction. Chapter 4 discusses the proposed algorithms through sensitivity studies concerning the thresholds and parameters used in the segmentation phase and surface reconstruction. Chapter 5 briefly explains the inspection workflow and the PDM requirements and solution, together with a preliminary assessment of the measurements and their reliability. These three chapters (3, 4 and 5) end with sections called “Discussion”, in which specific considerations are given. Finally, Chapter 6 gives examples of the proposed segmentation technique in the framework of industrial applications, through specific case studies.
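
    A minimal sketch of the voxel structure described above (with illustrative parameters, not the thesis' settings): points are bucketed into axis-aligned voxels, and each voxel's subset is treated as a local surface patch whose flatness follows from a PCA-style plane fit:

        # Sketch of a voxel structure: bucket points into voxels and score each
        # voxel's flatness from the smallest eigenvalue of its covariance.
        import numpy as np
        from collections import defaultdict

        def voxel_patches(points, voxel_size=2.0):
            """points: (N, 3) array. Returns {voxel index: (centroid, flatness score)}."""
            keys = np.floor(points / voxel_size).astype(int)
            buckets = defaultdict(list)
            for key, p in zip(map(tuple, keys), points):
                buckets[key].append(p)

            patches = {}
            for key, pts in buckets.items():
                pts = np.asarray(pts)
                if len(pts) < 3:
                    continue  # too few points for a meaningful plane fit
                centroid = pts.mean(axis=0)
                # Smallest covariance eigenvalue ~ squared thickness of the patch:
                # near zero on planar faces, larger across edges and curved features.
                eigvals = np.linalg.eigvalsh(np.cov((pts - centroid).T))
                patches[key] = (centroid, float(eigvals[0]))
            return patches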

    One-shot Feature-Preserving Point Cloud Simplification with Gaussian Processes on Riemannian Manifolds

    The processing, storage and transmission of large-scale point clouds is an ongoing challenge for the computer vision community which hinders progress in the application of 3D models to real-world settings, such as autonomous driving, virtual reality and remote sensing. We propose a novel, one-shot point cloud simplification method which preserves both the salient structural features and the overall shape of a point cloud without any prior surface reconstruction step. Our method employs Gaussian processes with kernels defined on Riemannian manifolds, allowing us to model the surface variation function across any given point cloud. A simplified version of the original cloud is obtained by sequentially selecting points using a greedy sparsification scheme. The selection criterion used for this scheme ensures that the simplified cloud best represents the surface variation of the original point cloud. We evaluate our method on several benchmark datasets, compare it to a range of existing methods and show that our method is competitive both in terms of empirical performance and computational efficiency. Comment: 10 pages, 5 figures
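
    A simplified sketch of the greedy sparsification loop follows. The paper defines Gaussian process kernels on a Riemannian manifold and models surface variation; here a plain Euclidean RBF kernel with unit prior variance stands in, so the code only conveys the selection scheme, not the full method:

        # Simplified greedy sparsification with a Euclidean RBF kernel (a stand-in for
        # the paper's Riemannian-manifold kernels and surface-variation model).
        import numpy as np

        def rbf(a, b, lengthscale=0.1):
            d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
            return np.exp(-0.5 * d2 / lengthscale ** 2)

        def greedy_sparsify(points, m, noise=1e-6):
            """Pick m points whose Gaussian process posterior best covers the rest."""
            selected = [0]  # seed with an arbitrary point
            for _ in range(m - 1):
                S = points[selected]
                K_ss = rbf(S, S) + noise * np.eye(len(S))
                K_xs = rbf(points, S)
                # Posterior variance of every candidate given the selected set
                # (unit prior variance assumed by the RBF kernel above).
                var = 1.0 - np.einsum("ij,ij->i", K_xs @ np.linalg.inv(K_ss), K_xs)
                var[selected] = -np.inf  # never re-pick an already selected point
                selected.append(int(np.argmax(var)))
            return points[selected]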