24 research outputs found

    3-D model construction using range and image data

    This paper deals with the automated creation of geometrically and photometrically correct 3-D models of the world. Such models can be used for virtual reality, tele-presence, digital cinematography, and urban planning applications. The combination of range sensing (dense depth estimates) and image sensing (color information) provides data sets that allow us to create geometrically correct, photorealistic models of high quality. The 3-D models are first built from range data using a volumetric set intersection method previously developed by us. Photometry can then be mapped onto these models by registering features from both the 3-D and 2-D data sets. Range data segmentation algorithms have been developed to identify planar regions, to determine linear features from planar intersections that can serve as features for registration with lines in the 2-D imagery, and to reduce the overall complexity of the models. Results are shown for models of large buildings on our campus, built using real data acquired from multiple sensors.
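    The linear features mentioned above come from intersecting the segmented planar regions. As a minimal sketch of that step (function name and API are ours, not the authors'), the intersection line of two planes n·x = d can be recovered with basic linear algebra:

```python
import numpy as np

def plane_intersection_line(n1, d1, n2, d2):
    """Intersect two planes n1.x = d1 and n2.x = d2.

    Returns (point, unit_direction) of the intersection line,
    or None if the planes are (near-)parallel.
    """
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    direction = np.cross(n1, n2)          # line direction lies in both planes
    if np.linalg.norm(direction) < 1e-9:
        return None                       # parallel planes: no unique line
    # Solve for one point on the line: the two plane constraints plus a
    # third constraint pinning the component along the line direction.
    A = np.vstack([n1, n2, direction])
    b = np.array([d1, d2, 0.0])
    point = np.linalg.solve(A, b)
    return point, direction / np.linalg.norm(direction)
```

Such 3-D lines can then be matched against 2-D line segments extracted from imagery, which is the registration cue the abstract describes.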

    Image Sequence Stabilization Through Model Based Registration

    Acquiring an image series with a digital still camera makes it possible to obtain animation of much higher resolution and quality than with a digital camcorder. However, producing animation this way raises several problems. In particular, if the motion involves changes in observer position and spatial orientation, the resulting animation may look choppy and unsmooth. If hardware-based stabilization of the camera during motion is not available, image processing methods are needed to obtain smooth animation. In this work we deal with an image sequence acquired around an object without stabilization, and we propose a method that produces smooth animation using the registration paradigm.
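    The general stabilization idea can be sketched as follows: estimate the inter-frame motion from registered feature points, then smooth the accumulated camera trajectory and apply the difference as a per-frame correction. This is a generic translation-only sketch under our own assumptions (the paper's model-based registration is richer), with a simple moving-average smoother:

```python
import numpy as np

def estimate_translation(pts_ref, pts_cur):
    """Least-squares 2-D translation aligning matched feature points."""
    return np.mean(np.asarray(pts_ref, float) - np.asarray(pts_cur, float), axis=0)

def smooth_trajectory(shifts, window=3):
    """Given per-frame shifts, return the correction that moves each frame
    from the raw camera path onto a moving-average-smoothed path."""
    shifts = np.asarray(shifts, float)
    traj = np.cumsum(shifts, axis=0)                 # raw camera path
    kernel = np.ones(window) / window
    smooth = np.vstack([np.convolve(traj[:, i], kernel, mode='same')
                        for i in range(traj.shape[1])]).T
    return smooth - traj                             # correction per frame
```

For steady motion the interior corrections are near zero; only jittery frames are shifted, which is what makes the output animation look smooth.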

    Architectural Scene Reconstruction from Single or Multiple Uncalibrated Images

    In this paper we present a system for the reconstruction of 3D models of architectural scenes from single or multiple uncalibrated images. A partial 3D model of a building is recovered from a single image using geometric constraints such as parallelism and orthogonality, which are likely to be found in most architectural scenes. The approximate corner positions of the building are selected interactively by a user and then refined automatically using the Hough transform. The relative depths of the corner points are calculated according to the perspective projection model. Partial 3D models recovered from different viewpoints are registered to a common coordinate system for integration. The registration is carried out using a modified ICP (iterative closest point) algorithm, with initial parameters provided by the geometric constraints of the building. The integrated 3D model is then fitted with piecewise planar surfaces to produce a more geometrically consistent model. Finally, the acquired images are mapped onto the surface of the reconstructed model to create a photo-realistic result. A working system that allows a user to interactively build a 3D model of an architectural scene from single or multiple images has been designed and implemented.
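    The core of the registration step is standard ICP: alternate nearest-neighbour matching with a closed-form rigid alignment. A minimal textbook sketch (not the paper's modified variant) using the Kabsch/SVD solution looks like this:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation R and translation t mapping A onto B (Kabsch)."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(A.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Basic ICP: iterate nearest-neighbour matching and rigid alignment."""
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    for _ in range(iters):
        # brute-force nearest neighbours (fine for small point sets)
        d = np.linalg.norm(src[:, None] - dst[None, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(src, matched)
        src = src @ R.T + t
    return src
```

Because ICP only converges from a good starting pose, the paper's use of the building's geometric constraints to supply initial parameters is exactly the kind of initialization this sketch would need in practice.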

    Toward 3D reconstruction of outdoor scenes using an MMW radar and a monocular vision sensor

    In this paper, we introduce a geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera. We rely on the complementarity of the two sensors: the robustness to environmental conditions and the depth detection ability of the radar on the one hand, and the high spatial resolution of the vision sensor on the other. First, geometric modeling of each sensor and of the entire system is presented. Second, we address the global calibration problem, which consists of finding the exact transformation between the sensors' coordinate systems. Two implementation methods, based on the optimization of a non-linear criterion obtained from a set of radar-to-image target correspondences, are proposed and compared. Unlike existing methods, no special configuration of the 3D points is required for calibration, which makes the methods flexible and easy to use by a non-expert operator. Finally, we present a very simple yet robust 3D reconstruction method based on the sensors' geometry. It reconstructs the observed features in 3D from a single acquisition (static sensor), a capability not always found in the state of the art for outdoor scene reconstruction. The proposed methods have been validated with synthetic and real data.
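    The calibration step boils down to minimizing a non-linear criterion over the transform parameters given target correspondences. As an illustrative toy version under our own assumptions (a planar rigid transform with parameters theta, tx, ty, not the paper's full model), a Gauss-Newton solver over the correspondence residuals looks like this:

```python
import numpy as np

def calibrate_2d(radar_pts, cam_pts, iters=50):
    """Estimate the rigid transform (theta, t) mapping radar-frame points
    onto camera-frame points by Gauss-Newton on the residuals."""
    radar_pts = np.asarray(radar_pts, float)
    cam_pts = np.asarray(cam_pts, float)
    theta, t = 0.0, np.zeros(2)
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        pred = radar_pts @ R.T + t
        r = (pred - cam_pts).ravel()              # stacked residual vector
        # Jacobian of the residuals wrt (theta, tx, ty)
        dR = np.array([[-s, -c], [c, -s]])        # dR/dtheta
        J = np.zeros((r.size, 3))
        J[:, 0] = (radar_pts @ dR.T).ravel()
        J[0::2, 1] = 1.0                          # x-residuals depend on tx
        J[1::2, 2] = 1.0                          # y-residuals depend on ty
        step = np.linalg.lstsq(J, -r, rcond=None)[0]
        theta += step[0]
        t += step[1:]
        if np.linalg.norm(step) < 1e-12:
            break
    return theta, t
```

Note that no particular spatial arrangement of the targets is needed here beyond non-degeneracy, which mirrors the flexibility the authors claim for their calibration.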

    State of research in automatic as-built modelling

    This is the final version of the article. It first appeared from Elsevier via http://dx.doi.org/10.1016/j.aei.2015.01.001. Building Information Models (BIMs) are becoming the official standard in the construction industry for encoding, reusing, and exchanging information about structural assets. Automatically generating such representations for existing assets stirs up the interest of various industrial, academic, and governmental parties, as it is expected to have a high economic impact. The purpose of this paper is to provide a general overview of the as-built modelling process, with a focus on the geometric modelling side. Relevant works from the Computer Vision, Geometry Processing, and Civil Engineering communities are presented and compared in terms of their potential to lead to automatic as-built modelling. We acknowledge the support of EPSRC Grant NMZJ/114, DARPA UPSIDE Grant A13–0895-S002, NSF CAREER Grant N. 1054127, and European Grant Agreements No. 247586 and 334241. We would also like to thank NSERC Canada, Aecon, and SNC-Lavalin for financially supporting parts of this research.

    The Utilization of Building Information Modeling in Computer-Controlled Automatic Construction: Case Study of a Six-Room Wooden House

    In the current context, Building Information Modeling (BIM) is belatedly providing the construction industry with a tool to reach higher levels of efficiency, quality, and convenience. However, human errors in both management and construction-site control may cause a construction project to go over budget or fall behind schedule. Moreover, a construction project requires the collaboration of various parties to achieve the end goals of the various stakeholders, and BIM provides one method of integrating the sharing of information between those parties. Extensions to current BIM methods may allow machines, such as construction robots, to take over some human tasks. The aim of this study is to investigate future methods for reducing the human effort in construction and improving the cost efficiency and quality of construction projects. This thesis looks to integrate the construction processes of design, manufacture, shipment, and installation: using data extracted from a BIM model, a conceptual computer-controlled automatic construction process is developed for a pseudo robot. The pseudo robot is merely a development tool for exploring the conceptual phases of a real robot. Following the Plan-Do-Check-Action (PDCA) management cycle, the workflow of the process is designed in pseudocode. A case study of a six-room wooden house is used to illustrate the function of the automatic construction system and to verify which information can be provided by BIM. Location control is identified in the study as the key criterion for attempting robotic construction, and an object positioning solution using a laser technique is suggested. The results show that the program provides adequate information to allow completion of the construction process, and a two-level method is developed for accurate positioning of building components. Further research may focus on more complicated and specialized projects and on more effective and accurate sensing and tracking technology.

    A generalisable framework for saliency-based line segment detection

    Here we present a novel, information-theoretic salient line segment detector. Existing line detectors typically use only the image gradient to search for potential lines; consequently, many lines are found, particularly in repetitive scenes. In contrast, our approach detects lines that define regions of significant divergence between pixel intensity or colour statistics. This results in a detector that naturally avoids the repetitive parts of a scene while detecting the strong, discriminative lines present. We furthermore use our approach as a saliency filter on existing line detectors to detect salient line segments more efficiently. The approach is highly generalisable, depending only on image statistics rather than the image gradient, as demonstrated by an extension to depth imagery. Our work is evaluated against a number of other line detectors, and a quantitative evaluation demonstrates a significant improvement over existing detectors for a range of image transformations.
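    The underlying idea, that a salient line separates regions with divergent pixel statistics, can be sketched in a few lines. This is our own simplified illustration (axis-aligned candidate lines, symmetric KL divergence between intensity histograms), not the authors' detector:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """KL divergence between two (unnormalised) histograms."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def line_saliency(img, row, bins=8):
    """Score a horizontal line at `row` by the symmetric KL divergence
    between the intensity histograms of the regions above and below it."""
    above = np.histogram(img[:row], bins=bins, range=(0, 1))[0]
    below = np.histogram(img[row:], bins=bins, range=(0, 1))[0]
    return kl_divergence(above, below) + kl_divergence(below, above)
```

A line through a repetitive texture separates two statistically identical regions and scores near zero, while a true region boundary scores highly; this is the behaviour that lets the approach suppress repetitive structure.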