7 research outputs found

    CityJSON: does (file) size matter?

    No full text
    The possibilities of 3D city models for analysing the built environment are increasingly being explored, and their inner workings are continuously being improved. They are also used more and more on the web, for example for visualisation, but also to query, analyse, and edit data. In this setting, the network becomes a potential new bottleneck for time performance. Especially when a 3D city model contains many attributes, the file size grows rapidly as the area of study is expanded.

    This presents efficiency challenges, and in this thesis I focus on improving the inner workings of 3D city models to relieve this problem, specifically to distribute and use them more efficiently on the web. I have done so by investigating and testing different compression techniques on 3D city models stored in the CityJSON format. CityJSON is already more compact than CityGML, and these techniques decrease the file sizes of datasets even further, allowing for faster transmission over a network. On the other hand, additional steps are needed to process compressed files. The goal of using a compression technique is a net speed gain: the time saved on downloading should exceed the additional time needed to process the file before transmission (compressing the data on the server) and after receipt (decompressing the data in the client).

    There are compression techniques for both general and specific purposes, and I have used combinations of them, among which Draco compression, zlib, CBOR, and a technique of my own design. The specific ones target the main characteristics of CityJSON datasets: the JSON structure, feature attributes, and feature geometries. To assess the impact of different combinations of compression techniques, I take uncompressed CityJSON performance as the baseline and have devised performance indicators that cover several use cases, such as visualisation, querying, analysis, and editing.

    I have benchmarked all combinations of compression techniques on the use cases of these performance indicators. For this I have created two types of server implementations: one in which datasets are compressed beforehand and processed in the client based on the user's request, and one in which the data is processed first and only then compressed and transmitted to the client. The results show the best-performing compression type per use case. The benchmarking is performed on a variety of datasets split into four categories: larger datasets, larger datasets without attributes, smaller datasets, and smaller datasets without attributes. This makes for very specific use cases, and choosing suitable compression types requires finding out which ones perform relatively well in most cases and are not difficult to implement, in order to keep CityJSON a simple file format. It turns out that Draco compression can give good results in specific situations but is in general not a good choice, both in terms of performance and from a developer's point of view. CBOR, zlib, and a combination of the two are easy to use and generally improve the performance of CityJSON on the web.

    https://github.com/jliempt/CityJSON-does-file-size-matter-Geomatic
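
    As a rough illustration of the trade-off described above, the sketch below compares the size and encoding time of zlib, CBOR, and their combination on a CityJSON file. The file name city.json and the use of the third-party cbor2 package are assumptions for demonstration; the thesis benchmarks these techniques (and Draco) far more systematically.

    ```python
    # Minimal sketch: compare zlib, CBOR, and their combination on a
    # CityJSON file. Assumes a local file "city.json" and the third-party
    # "cbor2" package (pip install cbor2); neither comes from the thesis.
    import json
    import time
    import zlib

    import cbor2

    with open("city.json", "rb") as f:
        raw = f.read()
    model = json.loads(raw)

    variants = {
        "zlib": lambda: zlib.compress(raw),
        "cbor": lambda: cbor2.dumps(model),
        "cbor+zlib": lambda: zlib.compress(cbor2.dumps(model)),
    }

    print(f"uncompressed: {len(raw):>12,} bytes")
    for name, encode in variants.items():
        t0 = time.perf_counter()
        blob = encode()
        dt = time.perf_counter() - t0
        # A net gain requires: saved transfer time > encode + decode time.
        print(f"{name:>12}: {len(blob):>12,} bytes ({dt * 1000:.1f} ms to encode)")
    ```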

    Automated 3D Reconstruction of LoD2 and LoD1 Models for All 10 Million Buildings of the Netherlands

    No full text
    In this paper, we present our workflow to automatically reconstruct three-dimensional (3D) building models based on two-dimensional building polygons and a lidar point cloud. The workflow generates models at different levels of detail (LoDs) to support the data requirements of different applications from one consistent source. Specific attention has been paid to making the workflow robust, so that a new iteration can quickly be run when an algorithm is improved or when new input data become available. The quality of the reconstructed data depends strongly on the quality of the input data and is monitored in several steps of the process. A 3D viewer has been developed to view and download the openly available 3D data at different LoDs in different formats. The workflow has been applied to all 10 million buildings of the Netherlands. The 3D service will be updated when new input data become available.
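
    As a purely hypothetical illustration of an early step in such a pipeline, the sketch below crops a lidar point cloud to one 2D building footprint before any roof reconstruction would take place. The file name, the toy footprint, and the use of the laspy and shapely packages are assumptions, not the authors' implementation.

    ```python
    # Hypothetical sketch: select the lidar points falling inside a single
    # 2D building footprint, the kind of per-building cropping a
    # reconstruction workflow needs before fitting roof surfaces.
    import numpy as np
    import laspy                               # pip install laspy
    from shapely import Polygon, contains_xy   # pip install "shapely>=2.0"

    # A toy footprint; real footprints would come from the 2D building polygons.
    footprint = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])

    las = laspy.read("lidar_tile.las")  # assumed AHN-style lidar tile
    x, y, z = np.asarray(las.x), np.asarray(las.y), np.asarray(las.z)

    inside = contains_xy(footprint, x, y)  # vectorised point-in-polygon test
    roof_candidates = np.column_stack([x, y, z])[inside]
    print(f"{inside.sum()} of {len(x)} points fall inside the footprint")
    ```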

    LoD2 for All 10 Million BAG Buildings in the Netherlands

    No full text
    3D applications often come with the desire to model buildings including their roof shapes. After years of research and development, we in Delft have realised a method that fully automatically reconstructs roof shapes (LoD2) from point clouds and 2D building polygons. With this method we have generated 3D models for all 10 million BAG buildings in the Netherlands, the first open 3D data set at this level of detail. Not all applications benefit from this level of detail, so in the same process we also reconstruct other levels of detail. The fully automatic process also ensures consistency in the future, when new models are reconstructed from up-to-date input data. In addition, we monitor several quality parameters that can help users apply the data correctly.

    Quality assessment of a nationwide data set containing automatically reconstructed 3d building models

    No full text
    Fully automated reconstruction of high-detail building models on a national scale is challenging. It raises a set of problems that are seldom encountered when processing smaller areas, such as single cities. Often there is no reference (ground truth) available to evaluate the quality of the reconstructed models. Therefore, only relative quality metrics are computed, comparing the models to the source data sets. In this paper we present a set of relative quality metrics that we use for assessing the quality of 3D building models reconstructed in a fully automated process, in Levels of Detail 1.2, 1.3, and 2.2, for the whole of the Netherlands. The source data sets for the reconstruction are the Dutch Building and Address Register (BAG) and the National Height Model (AHN). The quality assessment is done by comparing the building models to these two data sources. The work presented in this paper lays the foundation for future research on the quality control and management of automated building reconstruction. Additionally, it is an important step in our ongoing effort towards a fully automated reconstruction method for high-detail, high-quality building models.
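
    To give a flavour of what a relative quality metric can look like, the sketch below computes the RMS vertical deviation between lidar points and the reconstructed roof at the same locations. The function and the toy data are illustrative assumptions; the paper defines its own set of metrics against the BAG and AHN source data.

    ```python
    # Illustrative sketch of one possible relative quality metric: the RMS
    # vertical deviation of source lidar points from the reconstructed roof.
    # Variable names and the toy data are assumptions for demonstration.
    import numpy as np

    def roof_rmse(point_heights: np.ndarray, model_heights: np.ndarray) -> float:
        """RMS vertical deviation between source points and the model.

        point_heights: z of lidar points classified as roof.
        model_heights: z of the reconstructed roof at the same x,y locations.
        """
        residuals = point_heights - model_heights
        return float(np.sqrt(np.mean(residuals ** 2)))

    # Toy example: a flat reconstructed roof at 12.0 m vs noisy lidar returns.
    rng = np.random.default_rng(0)
    lidar_z = 12.0 + rng.normal(scale=0.05, size=500)
    print(f"RMSE: {roof_rmse(lidar_z, np.full(500, 12.0)):.3f} m")
    ```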

    GeoBIM information to check digital building permit regulations

    No full text
    Municipalities invest a lot of time and person-hours in manual building permit checks. With the increase in computational power and the use of Building Information Models (BIM) in the building design life cycle, several municipalities are investing in automating these checks using BIM and geo-data sets. However, few examples exist of tools that effectively use geo-information together with BIM. To address this gap, a project was developed with the municipality of Rotterdam (NL). In a previous phase, a tool was implemented that analyses BIM data to extract the information needed for a few representative regulations. To extend and improve that tool, a web-based interface has now been implemented and geo-data sets have been integrated into the process, allowing for more powerful GeoBIM analyses. Three checks using both BIM and GIS data were implemented and tested: (i) the parcel limit check evaluates whether the building footprint derived from BIM falls within the parcel limit provided by the municipality's parcel data sets; (ii) the height check evaluates the building's maximum height relative to the road's height; and (iii) the road overhang check detects roads neighbouring the parcel and evaluates the admissible overhang over those roads. This paper presents these developments, including the type of input data needed for the checks, the tools for the three new GeoBIM checks (parcel limit check, height check, road overhang check), and their implementation in the web-based tool.
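
    As a minimal sketch of check (i), the snippet below tests whether a BIM-derived footprint lies within a parcel polygon, assuming the shapely package. The geometries are toy values, not the project's data; the actual tool performs this check on real BIM and municipal parcel data sets.

    ```python
    # Minimal sketch of the parcel limit check: does the BIM-derived
    # footprint lie within the municipal parcel polygon?
    from shapely import Polygon  # pip install shapely

    parcel = Polygon([(0, 0), (20, 0), (20, 15), (0, 15)])     # from parcel data set
    footprint = Polygon([(2, 2), (12, 2), (12, 10), (2, 10)])  # derived from BIM

    if footprint.within(parcel):
        print("parcel limit check: PASS")
    else:
        # Report how far the design protrudes beyond the parcel boundary.
        overshoot = footprint.difference(parcel)
        print(f"parcel limit check: FAIL ({overshoot.area:.1f} m² outside the parcel)")
    ```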

    Reference study of CityGML software support: The GeoBIM benchmark 2019—Part II

    No full text
    OGC CityGML is an open standard for 3D city models intended to foster interoperability and support various applications. However, through our practical experience and discussions with practitioners, we have noticed several problems related to the implementation of the standard and the use of standardized data. Nevertheless, a systematic investigation of these issues has never been carried out, and there is thus insufficient evidence for tackling the problems. The GeoBIM benchmark project aims to find such evidence by involving external volunteers who report on various aspects of the behavior of tools (geometry, semantics, georeferencing, functionalities), analyzed and described in this article. This study explicitly points out the critical points embedded in the format as an evidence base for future development. A companion article (Part I) describes the results of the benchmark related to IFC, the counterpart of CityGML within building information modeling.

    Reference study of IFC software support: The GeoBIM benchmark 2019—Part I

    No full text
    Industry Foundation Classes (IFC), the buildingSMART open standard for BIM, is underused with respect to its promising potential, since, according to the experience of practitioners and researchers working with BIM, issues in the standard's implementation prevent its effective use. Nevertheless, a systematic investigation of these issues has never been carried out, and there is thus insufficient evidence for tackling the problems. The GeoBIM benchmark project aims to find such evidence by involving external volunteers who report on various aspects of the behavior of tools (geometry, semantics, georeferencing, functionalities), analyzed and described in this article. Interestingly, different IFC software programs given the same standardized data sets yield inconsistent results, with few detectable common patterns, and significant issues are found in their support of the standard, probably due to the very high complexity of the standard data model. A companion article (Part II) describes the results of the benchmark related to CityGML, the counterpart of IFC within geoinformation.