3 research outputs found

    A geometrical-based approach to recognise structure of complex interiors

    3D modelling of building interiors has attracted considerable interest in recent years, particularly since the rise of Building Information Modeling (BIM). A number of methods have been developed in the past, but most are limited to modelling non-complex interiors. 3D laser scanners are the preferred sensors for collecting the 3D data, yet the cost of state-of-the-art laser scanners is prohibitive for many; other types of sensors can also be used to generate the 3D data, but they have limitations, especially when dealing with clutter and occlusions. This research has developed a platform for producing 3D models of building interiors while adapting a low-cost, low-level laser scanner to generate the 3D interior data. The PreSuRe algorithm developed here introduces a new pipeline for modelling building interiors, combining novel methods with adapted existing approaches to model a variety of interiors, from sparse rooms to complex, highly cluttered and occluded interiors with non-ideal geometrical structure. This approach has successfully reconstructed the structure of interiors with above 96% accuracy, even with high amounts of noise and clutter. The time taken to produce the resulting model is close to real time, whereas existing techniques may take hours to generate the reconstruction. The produced model is also equipped with semantic information, which differentiates it from a regular 3D CAD drawing and allows it to be used to assist professionals and experts in related fields.
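    The abstract does not spell out the individual steps of the PreSuRe pipeline, but a common first step in recognising the geometrical structure of a cluttered interior scan is to extract the dominant planes (floor, ceiling, walls) by robust fitting. The sketch below shows a plain-NumPy RANSAC plane fit as a generic illustration of that idea; it is an assumption for illustration only, not the algorithm developed in the thesis.

```python
# Generic illustration: recognising planar structure (walls, floor, ceiling)
# in a noisy, cluttered interior scan via RANSAC plane fitting.
# This is NOT the PreSuRe pipeline itself, which the abstract does not detail.
import numpy as np

def ransac_plane(points, iters=500, threshold=0.02, rng=None):
    """Fit one dominant plane to an (N, 3) point cloud.

    Returns (normal, d, inlier_mask) for the plane n.x + d = 0.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_mask, best_model = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:                             # skip degenerate (collinear) samples
            continue
        n = n / norm
        d = -n @ sample[0]
        mask = np.abs(points @ n + d) < threshold   # points close to the candidate plane
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (n, d)
    return best_model[0], best_model[1], best_mask
```

    Repeating the fit on the points left after removing each plane's inliers yields a set of candidate structural planes even in the presence of clutter.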

    Cloud-to-Cloud Registration for 3D Point Data

    The vast potential of digitally representing objects by large collections of 3D points is being recognized on a global scale and has given rise to the popularity of point cloud data (PCD). 3D imaging sensors provide a means of quickly capturing dense and accurate geospatial information that represents the 3D geometry of objects in a digital environment. Due to spatial and temporal constraints, it is quite common for two or more sets of PCD to be obtained to support full 3D analysis. It is therefore essential that all the PCD be referenced to a common coordinate frame. This homogeneity in coordinates is achieved through a point cloud registration task, which involves determining a set of transformation parameters and applying them to transform one dataset into the reference frame of another, or into a global reference frame. The registration task typically relies on targets or other geometric features that are recognizable in the different sets of PCD, and the recognition of these features usually involves imagery, either intensity images, true-color images, or both. In this dissertation, cloud-to-cloud registration, also called surface matching or surface registration, is investigated as an alternative registration method with potential for improved automation and accuracy. The challenge in cloud-to-cloud registration lies in the fact that PCD are usually unstructured and carry little semantics. Two novel techniques were developed in this dissertation, one for the pairwise registration of PCD and the other for the global registration of PCD. The developed algorithms were evaluated by comparison with popular approaches, and improvements in registration accuracy of up to fourfold were obtained. The improvement may be attributed to some of the novel considerations introduced in this dissertation, the main one being the simultaneous consideration of the stochastic properties of a pair of scans via the symmetric correspondence.
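    To make the notion of "determining a set of transformation parameters and applying them" concrete, the sketch below estimates a rigid-body rotation and translation between two scans from known point correspondences using the standard SVD (Kabsch/Horn) solution, and then applies them. It is a textbook baseline for illustration only and does not implement the stochastic, symmetric-correspondence method developed in the dissertation.

```python
# Pairwise rigid registration from known correspondences (Kabsch/Horn).
# Illustrative baseline only, not the dissertation's registration method.
import numpy as np

def estimate_rigid_transform(source, target):
    """Estimate R (3x3) and t (3,) such that R @ source_i + t ~ target_i,
    given corresponding points source, target of shape (N, 3)."""
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    src_c = source - src_centroid
    tgt_c = target - tgt_centroid
    H = src_c.T @ tgt_c                                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = 1.0 if np.linalg.det(Vt.T @ U.T) > 0 else -1.0  # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

def apply_transform(points, R, t):
    """Bring one scan into the other's reference frame."""
    return points @ R.T + t
```

    In practice the correspondences themselves must first be established, for example via targets, recognizable features or iterative closest-point search, which is where much of the difficulty of cloud-to-cloud registration lies.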

    Orientation and integration of images and image blocks with laser scanning data

    Laser scanning and photogrammetry are methods for effective and accurate measurement and classification of urban and forest areas. Because these methods complement each other, their integration or integrated use brings additional benefits to real-life applications. However, finding tie features between the data sets is a challenging task, since laser scanning data and imagery are very different in nature. The aim of this thesis was to create methods for solving the relative orientation between laser scanning data and imagery that would assist near-future applications integrating laser scanning and photogrammetry. A further goal was to create methods enabling the use of data acquired from very different perspectives, such as terrestrial and airborne data. To meet these aims, an interactive orientation method enabling the use of single images, stereo images or larger image blocks was developed and tested; the multi-view approach usually has a significant advantage over the use of a single image. Once laser scanning data and imagery are accurately oriented, versatile applications become available, including automatic object recognition, accurate classification of individual trees, point cloud densification, automatic classification of land use, system calibration, and generation of photorealistic 3D models. Besides orientation, another aim of the research was to investigate how to fuse or use these two data types together in applications. As a result, examples were given covering the behavior of laser point clouds in both urban and forest areas, detection and visualization of temporal changes, enhanced data understanding, stereo visualization, multi-source and multi-angle data fusion, point cloud colorizing, and detailed examination of full-waveform laser scanning data.
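    One of the applications listed above, point cloud colorizing, follows directly from having the image orientation: each laser point is projected into the oriented image and picks up the underlying pixel color. The sketch below uses a plain pinhole model without lens distortion; the parameter names (K, R, t) and the distortion-free model are illustrative assumptions, not the camera model used in the thesis.

```python
# Point cloud colorizing by projecting laser points into an oriented image.
# Minimal pinhole sketch (no lens distortion, no occlusion handling).
import numpy as np

def colorize_points(points, image, K, R, t):
    """points: (N, 3) laser points in the world frame.
    image: (H, W, 3) RGB image.
    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation.
    Returns an (N, 3) array of colors, NaN where a point does not project
    into the image."""
    cam = points @ R.T + t                    # world -> camera coordinates
    in_front = cam[:, 2] > 0                  # keep points in front of the camera
    uvw = cam[in_front] @ K.T                 # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]             # perspective division
    h, w = image.shape[:2]
    cols = np.round(uv[:, 0]).astype(int)
    rows = np.round(uv[:, 1]).astype(int)
    inside = (cols >= 0) & (cols < w) & (rows >= 0) & (rows < h)
    colors = np.full((points.shape[0], 3), np.nan)
    visible = np.flatnonzero(in_front)[inside]
    colors[visible] = image[rows[inside], cols[inside]]
    return colors
```

    A full photogrammetric implementation would also account for lens distortion and for points occluded by other surfaces, both of which the sketch ignores.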