18 research outputs found

    Skull assembly and completion using template-based surface matching

    No full text
    Conference Name: 2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT 2011). Conference Address: Hangzhou, China. Time: May 16, 2011 - May 19, 2011. Sponsors: Natural Science Foundation of China; Hangzhou Dianzi University; Peking Univ., Key Lab of Mach. Percept. (Minist. Educ.); Zhejiang University; Microsoft Research.
    We present a skull assembly and completion framework based on shape matching. To assemble fragmented skulls, we need to compute rigid transformations from the fragments to their assembled geometry. We develop a reliable assembly pipeline in which each fragment is matched and transformed so that it is aligned with the template. To complete the assembled skull, which still contains several damaged regions, we use the template to repair those regions. The entire pipeline has been run on several real skull models and has demonstrated great robustness and effectiveness. © 2011 IEEE
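    The central geometric step described in this abstract, computing a rigid transformation that aligns a fragment with the template, can be illustrated with the standard Kabsch / orthogonal Procrustes solution. The sketch below assumes point correspondences between a fragment and the template are already known (the paper obtains alignment via surface matching); the function name and the synthetic example are illustrative, not the authors' implementation.

```python
# Minimal sketch: least-squares rigid alignment of fragment points to matched
# template points (Kabsch / orthogonal Procrustes). Not the paper's code.
import numpy as np

def rigid_transform(fragment_pts, template_pts):
    """Return (R, t) such that R @ p + t maps fragment points onto their
    corresponding template points in the least-squares sense."""
    src = np.asarray(fragment_pts, dtype=float)   # (N, 3) fragment points
    dst = np.asarray(template_pts, dtype=float)   # (N, 3) matched template points
    src_c = src - src.mean(axis=0)                # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

if __name__ == "__main__":
    # Toy check: displace a synthetic "template" and recover the alignment.
    rng = np.random.default_rng(0)
    template = rng.random((200, 3))
    a = 0.4
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0,        0.0,       1.0]])
    fragment = template @ R_true.T + np.array([0.1, -0.2, 0.3])
    R, t = rigid_transform(fragment, template)
    print(np.abs(fragment @ R.T + t - template).max())  # close to zero
```

    In a full assembly pipeline such a solver would sit inside a matching loop that also finds the correspondences; that loop is not spelled out in the abstract and is omitted here.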

    Building Information Modeling (BIM) for existing buildings - literature review and future needs

    Get PDF
    Abstract not available. Rebekka Volk, Julian Stengel, Frank Schultmann

    Consistent Density Scanning and Information Extraction From Point Clouds of Building Interiors

    Get PDF
    Over the last decade, 3D range scanning systems have improved considerably, enabling designers to capture large and complex domains such as building interiors. The captured point cloud is processed to extract specific Building Information Models, where the main research challenge is to simultaneously handle huge and cohesive point clouds representing multiple objects, occluded features and vast geometric diversity. These domain characteristics increase the data complexity and thus make it difficult to extract accurate information models from the captured point clouds. The research work presented in this thesis improves the information extraction pipeline through novel algorithms for consistent-density scanning and automated information extraction for building interiors. A restricted density-based scan planning methodology computes the number of scans needed to cover large linear domains while ensuring the desired data density and reducing rigorous post-processing of the data sets. The research further develops effective algorithms to transform the captured data into information models in terms of domain features (layouts), meaningful data clusters (segmented data) and specific shape attributes (occluded boundaries) with better practical utility. First, a direct point-based simplification and layout extraction algorithm is presented that handles cohesive point clouds through adaptive simplification and an accurate layout extraction approach without generating an intermediate model. Then, three information extraction algorithms are presented that transform point clouds into meaningful clusters. The novelty of these algorithms lies in the fact that they work directly on point clouds by exploiting their inherent characteristics. First, a rapid data clustering algorithm is presented to quickly identify objects in the scanned scene using a robust hue, saturation and value (HSV) color model for better scene understanding. A hierarchical clustering algorithm is then developed to handle the vast geometric diversity, ranging from planar walls to complex freeform objects. The shape-adaptive parameters help to segment planar as well as complex interiors, whereas combining color- and geometry-based segmentation criteria improves clustering reliability and identifies unique clusters from geometrically similar regions. Finally, a progressive scan-line-based, side-ratio-constraint algorithm is presented to identify occluded boundary data points by investigating their spatial discontinuity.
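    As one concrete illustration of the rapid HSV-based clustering step mentioned in this abstract, the sketch below converts point colors to HSV, bins points by hue, and spatially clusters each hue group. All names, the hue-binning scheme and the use of scikit-learn's DBSCAN are assumptions made for illustration; the thesis's actual algorithm is not reproduced here.

```python
# Illustrative sketch: group a colored point cloud by hue, then split each hue
# group into spatially connected clusters. Not the thesis's implementation.
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.cluster import DBSCAN

def hsv_clusters(xyz, rgb, hue_bins=12, eps=0.05, min_pts=20):
    """xyz: (N, 3) point coordinates; rgb: (N, 3) colors in [0, 1].
    Returns a list of index arrays, one per detected cluster."""
    hsv = rgb_to_hsv(rgb)                                  # per-point hue/sat/value
    hue_id = np.floor(hsv[:, 0] * hue_bins).astype(int) % hue_bins
    clusters = []
    for h in range(hue_bins):
        idx = np.flatnonzero(hue_id == h)                  # points sharing this hue
        if idx.size < min_pts:
            continue
        labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(xyz[idx])
        for lab in set(labels) - {-1}:                     # -1 marks DBSCAN noise
            clusters.append(idx[labels == lab])
    return clusters
```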

    Point clouds and thermal data fusion for automated gbXML-based building geometry model generation

    Get PDF
    Existing residential and small commercial buildings now represent the greatest opportunity to improve building energy efficiency. Building energy simulation analysis is becoming increasingly important because its results can help decision makers improve building energy efficiency and reduce environmental impacts. However, manually measuring the as-is conditions of building envelopes, including geometry and thermal values, is still a labor-intensive, costly, and slow process. Thus, the primary objective of this research was to automatically collect and extract the as-is geometry and thermal data of the building envelope components and create a gbXML-based building geometry model. In the proposed methodology, a rapid and low-cost data collection hardware system was first designed by integrating 3D laser scanners and an infrared (IR) camera. Secondly, several algorithms were created to automatically recognize various components of the building envelope as objects from the collected raw data. The extracted 3D semantic geometric model was then automatically saved in an industry-standard file format for data interoperability. The feasibility of the proposed method was validated through three case studies. The contributions of this research include 1) a customized, low-cost hybrid data collection system that fuses various data into a thermal point cloud; and 2) an automatic method of extracting building envelope components and their geometry data to generate a gbXML-based building geometry model. The broader impacts of this research are that it could offer a new way to collect as-is building data without impeding occupants' daily life, and provide an easier way for laypeople to understand the energy performance of their buildings via 3D thermal point cloud visualization. Ph.D.
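    The first listed contribution, fusing scan geometry and IR imagery into a thermal point cloud, amounts to projecting each 3D point into the thermal image and sampling a temperature for it. The sketch below shows that projection under a plain pinhole camera model; the calibration inputs K, R, t and the function name are illustrative assumptions, not the dissertation's code.

```python
# Hedged sketch: attach a temperature from an IR image to each visible 3D point,
# producing a "thermal point cloud". Assumes a calibrated pinhole IR camera.
import numpy as np

def fuse_thermal(points, thermal_image, K, R, t):
    """points: (N, 3) in the scanner frame; thermal_image: (H, W) temperatures;
    K: 3x3 intrinsics; R, t: scanner-to-camera extrinsics.
    Returns an (M, 4) array of [x, y, z, temperature] for visible points."""
    cam = points @ R.T + t                       # transform into the camera frame
    in_front = cam[:, 2] > 0                     # keep points in front of the camera
    cam = cam[in_front]
    pix = cam @ K.T                              # perspective projection
    u = pix[:, 0] / pix[:, 2]
    v = pix[:, 1] / pix[:, 2]
    h, w = thermal_image.shape
    visible = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    temps = thermal_image[v[visible].astype(int), u[visible].astype(int)]
    return np.column_stack([points[in_front][visible], temps])
```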

    Building Information Modeling (BIM) for existing buildings — Literature review and future needs

    Full text link

    Multi-Scale Hierarchical Conditional Random Field for Railway Electrification Scene Classification Using Mobile Laser Scanning Data

    Get PDF
    With the recent rapid development of high-speed railways in many countries, precise inspection of railway electrification systems has become more important for ensuring safe railway operation. However, the current time-consuming manual inspection cannot satisfy this demanding task, so a safe, fast and automatic inspection method is required. With LiDAR (Light Detection and Ranging) data becoming more available, accurate railway electrification scene understanding from LiDAR data becomes feasible as a step towards automatic, precise 3D inspection. This thesis presents a supervised learning method to classify railway electrification objects from Mobile Laser Scanning (MLS) data. First, a multi-range Conditional Random Field (CRF), which characterizes not only labeling homogeneity at short range but also the layout compatibility between different objects at middle range within the probabilistic graphical model, is implemented and tested. This multi-range CRF model is then extended and improved into a hierarchical CRF model that considers multi-scale layout compatibility at full range. The proposed method is evaluated on a dataset collected in Korea in a complex railway electrification system environment. The experiments show the effectiveness of the proposed model.
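    For readers unfamiliar with CRFs, the sketch below shows the basic ingredients this abstract builds on: a labeling energy made of per-point unary costs plus a Potts-style pairwise penalty on neighboring points that take different labels. The thesis's multi-range and hierarchical potentials are considerably richer than this; the function and parameter names here are illustrative only.

```python
# Hedged sketch of a plain pairwise CRF energy over a point cloud.
# Lower energy corresponds to a more plausible labeling.
import numpy as np

def crf_energy(unary, edges, labels, smoothness=1.0):
    """unary: (N, L) cost of assigning each of L labels to each point;
    edges: (E, 2) index pairs of neighboring points;
    labels: (N,) candidate labeling to evaluate."""
    data_term = unary[np.arange(len(labels)), labels].sum()
    li = labels[edges[:, 0]]
    lj = labels[edges[:, 1]]
    pairwise_term = smoothness * np.count_nonzero(li != lj)   # Potts penalty
    return data_term + pairwise_term
```

    Inference then searches for the labeling that minimizes such an energy; the multi-range and hierarchical models described above add further terms defined over larger neighborhoods and scales.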

    Vision-Based 2D and 3D Human Activity Recognition

    Get PDF

    Automatic Landmarking for Non-cooperative 3D Face Recognition

    Get PDF
    This thesis describes a new framework for 3D surface landmarking and evaluates its performance for feature localisation on human faces. The framework has two main parts that can be designed and optimised independently. The first is a keypoint detection system that returns positions of interest for a given mesh surface by using a learnt dictionary of local shapes. The second is a labelling system, using model-fitting approaches, that establishes a one-to-one correspondence between the set of unlabelled input points and a learnt representation of the class of object to detect. Our keypoint detection system returns local maxima over score maps that are generated from an arbitrarily large set of local shape descriptors. The distributions of these descriptors (scalars or histograms) are learnt for known landmark positions on a training dataset in order to generate a model. The similarity between the input descriptor value for a given vertex and a model shape is used as a descriptor-related score. Our labelling system can make use of both hypergraph matching techniques and rigid registration techniques to reduce the ambiguity attached to unlabelled input keypoints for which a list of model landmark candidates has been seeded. The soft matching techniques use multi-attributed hyperedges to reduce ambiguity, while the registration techniques use scale-adapted rigid transformations computed from 3 or more points in order to obtain one-to-one correspondences. Our final system achieves results that are better than or comparable to the state of the art (depending on the metric) while being more generic. It does not require pre-processing such as cropping, spike removal and hole filling, and is more robust to occlusion of salient local regions, such as those near the nose tip and inner eye corners. It is also fully pose-invariant and can be used with object classes other than faces, provided that labelled training data is available.
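    The keypoint detection stage described above scores every vertex by the similarity between its local shape descriptor and a learnt model descriptor, then keeps local maxima of the resulting score map. The sketch below is a minimal, assumption-laden version of that idea (illustrative names, a simple negative-distance similarity, precomputed descriptors and 1-ring neighbourhoods); it is not the thesis's implementation.

```python
# Hedged sketch: per-vertex descriptor scores and local-maxima keypoints.
import numpy as np

def keypoints_from_scores(descriptors, model_descriptor, neighbors):
    """descriptors: (N, D) per-vertex local shape descriptors;
    model_descriptor: (D,) descriptor learnt for a landmark class;
    neighbors: list of N index arrays giving each vertex's 1-ring.
    Returns (keypoint indices, per-vertex scores)."""
    diff = descriptors - model_descriptor
    scores = -np.linalg.norm(diff, axis=1)          # higher = more similar
    keypoints = [v for v, nbrs in enumerate(neighbors)
                 if len(nbrs) > 0 and scores[v] >= scores[nbrs].max()]
    return np.array(keypoints), scores
```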

    Time-of-Flight Cameras and Microsoft Kinect™

    Full text link