18 research outputs found

    Simplifying Urban Data Fusion with BigSUR

    Our ability to understand data has always lagged behind our ability to collect it. This is particularly true in urban environments, where mass data capture is especially valuable, but the objects captured are more varied, dense, and complex. Captured data has several problems: it is unstructured (we do not know which objects are encoded by the data), contains noise (the scanning process is often inaccurate), and has omissions (it is often impossible to scan all of a building). To understand the structure and content of the environment, we must process the unstructured data into a structured form. BigSUR is an urban reconstruction algorithm which fuses GIS (Geographic Information System / mapping) data, photogrammetric meshes, and street-level photography to create clean, representative, semantically labelled geometry. However, we have identified three problems with the system: i) the street-level photography is often difficult to acquire; ii) novel façade styles often frustrate the detection of windows and doors; iii) the computational requirements of the system are large — processing a large city block can take up to 15 hours. Here we describe the process of simplifying and validating the BigSUR semantic reconstruction system. In particular, the requirement for street-level images is removed, and a greedy post-process profile assignment is introduced to accelerate the system. We accomplish this by modifying the binary integer programming (BIP) optimization and re-evaluating the effects of various parameters. The new variant of the system is evaluated over a variety of urban areas. We objectively measure mean squared error (MSE) terms over the unstructured geometry, showing that BigSUR is able to accurately recover omissions and discard inaccuracies in the input data. Further, we evaluate the ability of the system to label the walls and roofs of input meshes, concluding that our new BigSUR variant achieves highly accurate semantic labelling with shorter computational time and less input data.
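An MSE term of the kind described can be sketched as an average nearest-neighbour squared distance between reconstructed geometry and ground truth. This is a minimal illustration under assumed conventions (points as (x, y, z) tuples, brute-force search), not the paper's implementation:

```python
def mse_to_ground_truth(recon_pts, truth_pts):
    """Mean squared distance from each reconstructed point to its
    nearest ground-truth point -- a simple proxy for a geometric
    MSE term. Names and point format are illustrative."""
    total = 0.0
    for x, y, z in recon_pts:
        nearest = min(
            (tx - x) ** 2 + (ty - y) ** 2 + (tz - z) ** 2
            for tx, ty, tz in truth_pts
        )
        total += nearest
    return total / len(recon_pts)

# A perfect reconstruction has zero error.
truth = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(mse_to_ground_truth(truth, truth))  # -> 0.0
```

In practice such queries would use a spatial index (e.g. a k-d tree) rather than a brute-force scan.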

    Automatic Generation of 3D Building Models with Efficient Solar Photovoltaic Generation

    To facilitate public involvement in sustainable development, 3D models simulating real or near-future cities using 3D Computer Graphics (CG) can be of great use. 3D city models are important in environmentally friendly urban planning that will use solar photovoltaic (PV) generation. However, enormous time and labour must be spent to create these 3D models using 3D modelling software such as 3ds Max or SketchUp. In order to automate these laborious steps, this paper proposes a Geographic Information System (GIS) and CG integrated system that automatically generates 3D building models based on building polygons, or building footprints, on digital maps, in which most building polygons' edges meet at right angles (orthogonal polygons). A complicated orthogonal polygon can be partitioned into a set of rectangles. The proposed integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies onto these rectangles. For placing solar panels on a hipped roof, the structure of an ordinary hipped roof, made up of two triangular roof boards and two trapezoidal ones, is clarified. To implement efficient PV generation, this paper proposes to automatically generate 3D building models for buildings topped with double shed roofs with overlaid PV arrays. The sizes, positions, and slopes of the roof boards and the main under-roof constructions are made clear by presenting a top view and a side view of a double shed roof house. As an applied example of the developed system, this paper presents a simulation of the change in solar photovoltaic generation of a city block after performing land readjustment and changing the shape of buildings between ordinarily roofed houses and double-shed-roofed houses suitable for efficient PV generation. Our simulation reveals that double-shed-roofed houses greatly improve solar photovoltaic generation.
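The dependence of PV yield on roof-board slope can be illustrated with a simple incidence-angle model. This formula and its parameters (panel area, efficiency, daily irradiance, sun elevation) are illustrative assumptions, not the paper's simulation:

```python
import math

def pv_daily_energy_kwh(panel_area_m2, efficiency,
                        irradiance_kwh_m2_day,
                        roof_tilt_deg, sun_elevation_deg):
    """Rough daily PV yield for one roof board: output scales with
    the cosine of the angle between the panel normal and the sun
    direction. A tilt of (90 - elevation) degrees is optimal here.
    All names and the formula itself are illustrative."""
    incidence = math.radians(abs(90.0 - sun_elevation_deg - roof_tilt_deg))
    tilt_factor = max(0.0, math.cos(incidence))
    return panel_area_m2 * efficiency * irradiance_kwh_m2_day * tilt_factor

# 20 m^2 of panels at 18% efficiency, 5 kWh/m^2/day, optimally tilted:
print(pv_daily_energy_kwh(20.0, 0.18, 5.0, 30.0, 60.0))  # -> 18.0
```

A real simulation would integrate over the sun's path and account for shading between buildings, which is where the 3D city model becomes essential.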

    SEGMENTATION OF 3D PHOTOGRAMMETRIC POINT CLOUD FOR 3D BUILDING MODELING

    3D city modeling has become important over the last decades as these models are being used in different studies including energy evaluation, visibility analysis, 3D cadastre, urban planning, change detection, and disaster management. Segmentation and classification of photogrammetric or LiDAR data are important for 3D city models, as these are the main data sources, and these tasks are challenging due to their complexity. This study presents research in progress which focuses on the segmentation and classification of 3D point clouds and orthoimages to generate 3D urban models. The aim is to classify photogrammetry-based point clouds (> 30 pts/sqm) in combination with aerial RGB orthoimages (~ 10 cm resolution) in order to label buildings, ground-level objects (GLOs), trees, grass areas, and other regions. On the one hand, the classification of aerial orthoimages is foreseen to be a fast approach to obtain classes and then transfer them from image space to the point cloud; on the other hand, segmenting a point cloud is expected to be much more time-consuming but to provide significant segments of the analyzed scene. For this reason, the proposed method combines segmentation methods on the two sources of geoinformation in order to achieve better results.
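The image-to-point-cloud label transfer mentioned above can be sketched as a simple raster lookup: each 3D point inherits the class of the orthoimage cell its planimetric position falls in. The raster layout, origin convention, and names below are assumptions for illustration, not the study's pipeline:

```python
def transfer_classes(points, class_raster, origin, cell_size):
    """Assign each (x, y, z) point the class of the orthoimage cell
    containing (x, y). class_raster is a row-major 2D list of labels;
    origin is the (x, y) of cell [0][0]. Real orthoimages usually
    index rows from north downward; this sketch ignores that detail."""
    ox, oy = origin
    labelled = []
    for x, y, z in points:
        col = int((x - ox) / cell_size)
        row = int((y - oy) / cell_size)
        if 0 <= row < len(class_raster) and 0 <= col < len(class_raster[0]):
            labelled.append((x, y, z, class_raster[row][col]))
        else:
            labelled.append((x, y, z, "unclassified"))
    return labelled

raster = [["building", "tree"],
          ["grass", "ground"]]
pts = [(0.5, 0.5, 10.0), (1.5, 1.5, 0.2)]
print(transfer_classes(pts, raster, (0.0, 0.0), 1.0))
```

Segment-level transfer (majority vote of the pixels under each point-cloud segment) would be more robust than this per-point lookup.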

    Critical factors and guidelines for 3D surveying and modelling in Cultural Heritage

    The 3D digitization of sites or objects, normally referred to as “reality-based 3D surveying and modelling”, is based on 3D optical instruments able to deliver accurate, detailed, and realistic 3D results. Nowadays many non-experts are approaching the 3D world and its technologies (hardware and software) due to their ease of use, but incorrect use leads to wrong results and conclusions. The goal of this article is to critically review the 3D digitization pipeline with some Cultural Heritage examples. Based on our experiences, some guidelines are drawn as best practices for non-experts, to clearly point out the right approach for every goal and project.

    Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning

    The importance of landscape and heritage recording and documentation with optical remote sensing sensors is well recognized at the international level. The continuous development of new sensors, data capture methodologies, and multi-resolution 3D representations contributes significantly to the digital 3D documentation, mapping, conservation, and representation of landscapes and heritage, and to the growth of research in this field. This article reviews current optical 3D measurement sensors and 3D modeling techniques, with their limitations and potentialities, requirements, and specifications. Examples of 3D surveying and modeling of heritage sites and objects are shown throughout the paper.

    Modeling urban landscapes from point clouds: a generic approach

    We present a robust method for modeling cities from 3D point data. Our algorithm provides a more complete description than existing approaches by simultaneously reconstructing buildings, trees, and topographically complex ground. A major contribution of our work is the original way of modeling buildings, which guarantees a high generalization level while keeping representations semantized and compact. Geometric 3D primitives such as planes, cylinders, spheres, or cones describe regular roof sections, and are combined with mesh patches that represent irregular roof components. The various urban components interact through a non-convex energy minimization problem in which they are propagated under arrangement constraints over a planimetric map. Our approach is experimentally validated on complex buildings and on large urban scenes containing millions of points, and compared to state-of-the-art methods.
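The extraction of planar roof primitives from point data can be illustrated with a generic RANSAC plane fit. This is a sketch of that generic step only, not the paper's energy-minimization formulation; iteration count and tolerance are arbitrary choices:

```python
import random

def ransac_plane(points, n_iters=200, tol=0.05, seed=0):
    """Fit one dominant plane to (x, y, z) points by RANSAC.
    Returns (normal, d, inliers) for the plane n . p = d."""
    rng = random.Random(seed)
    best = (None, None, [])
    for _ in range(n_iters):
        a, b, c = rng.sample(points, 3)
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        n = [u[1] * v[2] - u[2] * v[1],        # cross product u x v
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0]]
        norm = sum(x * x for x in n) ** 0.5
        if norm < 1e-12:                       # degenerate (collinear) sample
            continue
        n = [x / norm for x in n]
        d = sum(n[i] * a[i] for i in range(3))
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) - d) < tol]
        if len(inliers) > len(best[2]):
            best = (n, d, inliers)
    return best

# 25 points on the z = 0 plane plus one outlier high above it:
plane = [(x * 0.1, y * 0.1, 0.0) for x in range(5) for y in range(5)]
normal, d, inliers = ransac_plane(plane + [(0.0, 0.0, 5.0)])
print(len(inliers))  # -> 25
```

Repeating the fit on the residual points, and adding cylinder/sphere/cone models, yields the kind of multi-primitive decomposition the abstract describes.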

    Occlusion-Aware Multi-View Reconstruction of Articulated Objects for Manipulation

    The goal of this research is to develop algorithms that use multiple views to automatically recover complete 3D models of articulated objects in unstructured environments, thereby enabling a robotic system to manipulate those objects. First, an algorithm called Procrustes-Lo-RANSAC (PLR) is presented. Structure-from-motion techniques are used to capture 3D point cloud models of an articulated object in two different configurations. Procrustes analysis, combined with a locally optimized RANSAC sampling strategy, facilitates a straightforward geometric approach to recovering the joint axes, as well as classifying them automatically as either revolute or prismatic. The algorithm does not require prior knowledge of the object, nor does it make any assumptions about the planarity of the object or scene. Second, with the resulting articulated model, a robotic system is able to manipulate the object along its joint axes at a specified grasp point in order to exercise its degrees of freedom, or to move its end effector to a particular position even if that point is not visible in the current view. This is one of the main advantages of the occlusion-aware approach: because the models capture all sides of the object, the robot has knowledge of parts of the object that are not visible in the current view. Experiments with a PUMA 500 robotic arm demonstrate the effectiveness of the approach on a variety of real-world objects containing both revolute and prismatic joints. Third, we improve the proposed approach by using an RGBD sensor (Microsoft Kinect) that yields a depth value for each pixel directly, rather than requiring correspondences to establish depth. The KinectFusion algorithm is applied to produce a single high-quality, geometrically accurate 3D model, from which the rigid links of the object are segmented and aligned, allowing the joint axes to be estimated using the geometric approach. The improved algorithm does not require artificial markers attached to objects, yields much denser 3D models, and reduces the computation time.
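The revolute-vs-prismatic classification can be sketched from the relative rigid motion (R, t) of a link between the two configurations: a significant rotation angle indicates a revolute joint, while pure translation indicates a prismatic one. The thresholds below are illustrative assumptions, not the PLR algorithm's values:

```python
import math

def classify_joint(R, t, angle_tol_deg=1.0, trans_tol=1e-3):
    """Classify a joint from a link's relative rigid motion between
    two configurations. The rotation angle comes from the trace of R:
    theta = acos((tr(R) - 1) / 2). Thresholds are illustrative."""
    trace = R[0][0] + R[1][1] + R[2][2]
    cos_theta = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp for safety
    theta = math.degrees(math.acos(cos_theta))
    moved = math.sqrt(sum(x * x for x in t)) > trans_tol
    if theta > angle_tol_deg:
        return "revolute"
    return "prismatic" if moved else "fixed"

# A 90-degree rotation about z is revolute; pure translation is prismatic.
Rz = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
print(classify_joint(Rz, [0.0, 0.0, 0.0]))  # -> revolute
print(classify_joint(I3, [0.0, 0.0, 0.1]))  # -> prismatic
```

In the full approach, (R, t) would itself be estimated by Procrustes alignment of the segmented link's points across the two scans, with RANSAC rejecting points that belong to other links.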