
    Walk2Map: Extracting Floor Plans from Indoor Walk Trajectories

    Full text link
    Recent years have seen a proliferation of new digital products for the efficient management of indoor spaces, with important applications like emergency management, virtual property showcasing and interior design. While highly innovative and effective, these products rely on accurate 3D models of the environments considered, including information on both architectural and non-permanent elements. These models must be created from measured data such as RGB-D images or 3D point clouds, whose capture and consolidation involves lengthy data workflows. This strongly limits the rate at which 3D models can be produced, preventing the adoption of many digital services for indoor space management. We provide a radical alternative to such data-intensive procedures by presenting Walk2Map, a data-driven approach to generate floor plans only from trajectories of a person walking inside the rooms. Thanks to recent advances in data-driven inertial odometry, such minimalistic input data can be acquired from the IMU readings of consumer-level smartphones, which allows for an effortless and scalable mapping of real-world indoor spaces. Our work is based on learning the latent relation between an indoor walk trajectory and the information represented in a floor plan: interior space footprint, portals, and furniture. We distinguish between recovering area-related (interior footprint, furniture) and wall-related (doors) information and use two different neural architectures for the two tasks: an image-based Encoder-Decoder and a Graph Convolutional Network, respectively. We train our networks using scanned 3D indoor models and apply them in a cascaded fashion on an indoor walk trajectory at inference time. We perform a qualitative and quantitative evaluation using both trajectories simulated from scanned models of interiors and measured, real-world trajectories, and compare against a baseline method for image-to-image translation. The experiments confirm that our technique is viable and allows recovering reliable floor plans from minimal walk trajectory data.
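
    The cascaded pipeline described in the abstract can be sketched as follows. This is a minimal illustration, assuming a hypothetical rasterization scheme, grid size and network widths; it is not the authors' implementation, and the graph-based door prediction stage is only indicated in a comment.

    ```python
    # Minimal sketch of a Walk2Map-style cascade: rasterize a walk trajectory,
    # predict area-related masks with an encoder-decoder, then (not shown) run a
    # graph network for wall-related elements such as doors.
    import numpy as np
    import torch
    import torch.nn as nn

    def rasterize_trajectory(xy, grid=256, extent=10.0):
        """Rasterize 2D walk positions (metres) into a binary occupancy image."""
        img = np.zeros((grid, grid), dtype=np.float32)
        idx = np.clip(((xy / extent + 0.5) * grid).astype(int), 0, grid - 1)
        img[idx[:, 1], idx[:, 0]] = 1.0
        return img

    class FootprintEncoderDecoder(nn.Module):
        """Image-to-image network predicting interior footprint / furniture masks."""
        def __init__(self, out_channels=2):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, out_channels, 4, stride=2, padding=1))
        def forward(self, x):
            return self.dec(self.enc(x))

    # Cascade at inference: area-related masks first, then a graph model for doors.
    xy = np.cumsum(np.random.randn(500, 2) * 0.05, axis=0)   # toy odometry track
    img = torch.from_numpy(rasterize_trajectory(xy))[None, None]
    masks = FootprintEncoderDecoder()(img)                    # footprint + furniture
    # A Graph Convolutional Network over trajectory/wall nodes would consume these
    # masks next to localize doors along predicted wall segments (omitted here).
    print(masks.shape)  # torch.Size([1, 2, 256, 256])
    ```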

    HYBRID GIS-BIM APPROACH FOR THE TORINO DIGITAL-TWIN: THE IMPLEMENTATION OF A FLOOR-LEVEL 3D CITY GEODATABASE

    Get PDF
    The research presents preliminary work on the geo-spatial management of public administration assets through the interoperability of BIM-GIS models in urban-scale scenarios. The proposed strategy addresses the management, conversion and integration of databases related to public assets, in particular school buildings, and their linkage to city-level geo-databases. The methodology, based on the real scenario of the Torino Municipality and the needs it has expressed in recent studies in collaboration with FULL – Future Urban Legacy Lab at Politecnico di Torino, takes advantage of the availability of two test datasets at different scales, each with its own potential and bottlenecks. Developing a 3D digital twin of Torino is not only a matter of 3D city modelling: it first requires the integration and harmonization of existing databases. These data collections often stem from different update cycles and are based on non-homogeneous languages and data models; the data are frequently in table format and managed by different offices, each with its own management system. Moreover, public administrations such as Torino's have recently seen an increased availability of BIM models, especially for public assets, which need to be known, archived and localized in a geographic dimension in order to benefit from the real strategic potential of a spatially enabled facility management platform such as a Digital Twin. Combining a parametric modelling tool (Revit) and a visual programming language (Dynamo), the proposed methodology elaborates rules on a set of shared parameters characterizing the buildings as attributes in both datasets: ID; address; construction; floors; room dimensions, use and floor; height; glass surfaces. This is tested as a conversion workflow between the Municipality DB and the BIM model. The solution first enables interaction and queries between the models, and then highlights the open issues that arise once the enriched BIM model is imported into the geographical dimension of the Torino 3D city model Digital Twin (ArcGIS Pro platform), according to LOD standards, and enriched with semantic components from the municipality DB.
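
    The core of the conversion workflow is a mapping between database fields and shared BIM parameters. Below is a minimal sketch of that mapping step in Python; the field and parameter names are illustrative assumptions, and in the described workflow this step is driven by Dynamo against a Revit model rather than by standalone code.

    ```python
    # Sketch of translating a municipality DB record into the shared BIM
    # parameter set (names are hypothetical, chosen to mirror the attribute
    # list in the abstract: ID, address, construction, floors, height, glass).
    MUNICIPALITY_TO_BIM = {
        "building_id":    "ID",
        "street_address": "Address",
        "year_built":     "Construction",
        "n_floors":       "Floors",
        "height_m":       "Height",
        "glass_area_m2":  "GlassSurfaces",
    }

    def db_record_to_bim_parameters(record: dict) -> dict:
        """Translate one DB row into the shared BIM parameter set, skipping gaps."""
        return {bim: record[db] for db, bim in MUNICIPALITY_TO_BIM.items() if db in record}

    school = {"building_id": "SC-042", "street_address": "Via Roma 1",
              "year_built": 1962, "n_floors": 3, "height_m": 12.5}
    print(db_record_to_bim_parameters(school))
    ```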

    3D Reconstruction of Indoor Corridor Models Using Single Imagery and Video Sequences

    Get PDF
    In recent years, 3D indoor modeling has gained more attention due to its role in the decision-making processes of maintaining the status and managing the security of building indoor spaces. In this thesis, the problem of continuous indoor corridor space modeling is tackled through two approaches. The first approach develops a modeling method based on middle-level perceptual organization. The second approach develops a visual Simultaneous Localisation and Mapping (SLAM) system with model-based loop closure. In the first approach, the image space is searched for a corridor layout that can be converted into a geometrically accurate 3D model. The Manhattan-world assumption is adopted, and indoor corridor layout hypotheses are generated through a random, rule-based intersection of physical image line segments and virtual rays from orthogonal vanishing points. Volumetric reasoning, correspondences to physical edges, the orientation map and the geometric context of an image are all considered when scoring layout hypotheses. This approach provides physically plausible solutions even when objects or occlusions are present in a corridor scene. In the second approach, Layout SLAM is introduced. Layout SLAM performs camera localization while mapping layout corners and normal point features in 3D space. A new feature-matching cost function is proposed that considers both local and global context information. In addition, a rotation compensation variable makes Layout SLAM robust against the accumulation of camera orientation errors. Moreover, layout model matching of keyframes ensures accurate loop closures that prevent mis-association of newly visited landmarks with previously visited scene parts. Comparison of the generated single image-based 3D models to ground truth models showed average ratio differences in widths, heights and lengths of 1.8%, 3.7% and 19.2%, respectively. Layout SLAM achieved a maximum absolute trajectory error of 2.4 m in position and 8.2 degrees in orientation over an approximately 318 m path on the RAWSEEDS data set. Loop closing performed robustly for Layout SLAM, providing 3D indoor corridor layouts with less than 1.05 m displacement error in length and less than 20 cm in width and height over an approximately 315 m path on the York University data set. The proposed methods can successfully generate 3D indoor corridor models compared to their major counterparts.
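
    The abstract mentions a feature-matching cost that combines local and global context information. The following is a generic sketch of such a combined cost, not the thesis' actual formulation: the weighting, the descriptor distance and the neighbourhood-context measure are all illustrative assumptions.

    ```python
    # Generic sketch of a matching cost that mixes a local descriptor distance
    # with a global-context term, in the spirit of the Layout SLAM matching
    # described above (weights and context measure are hypothetical).
    import numpy as np

    def matching_cost(desc_a, desc_b, ctx_a, ctx_b, w_local=0.7, w_global=0.3):
        """Lower is better: local appearance distance plus neighbourhood-context distance."""
        local = np.linalg.norm(desc_a - desc_b)          # descriptor distance
        global_ctx = np.linalg.norm(ctx_a - ctx_b)       # e.g. neighbour-direction histograms
        return w_local * local + w_global * global_ctx

    # Toy usage with random descriptors and neighbour-direction histograms.
    rng = np.random.default_rng(0)
    cost = matching_cost(rng.random(32), rng.random(32), rng.random(8), rng.random(8))
    print(round(cost, 3))
    ```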