
    Towards 3D Matching of Point Clouds Derived from Oblique and Nadir Airborne Imagery

    Because the image collection process is inexpensive and efficient, and because the images carry rich 3D and texture information, the combined use of 2D airborne nadir and oblique images to reconstruct 3D geometric scenes holds promise for commercial applications such as urban planning and first response. The methodology introduced in this thesis provides a feasible path towards fully automated 3D city modeling from oblique and nadir airborne imagery. The difficulty of matching 2D images with large disparity is avoided by first grouping the images and then applying 3D registration. The procedure starts with the extraction of point clouds using a modified version of the RIT 3D Extraction Workflow. The point clouds are then refined by noise removal and surface smoothing. Since the point clouds extracted from different image groups use independent coordinate systems, they differ in translation, rotation and scale. To recover these differences, 3D keypoints and their features are extracted, and for each pair of point clouds an initial alignment and a more accurate registration are applied in succession. The final transform matrix holds the parameters describing the required translation, rotation and scale. The methodology has been shown to behave well on test data, and its robustness is examined by adding artificial noise to that data. For Pictometry oblique aerial imagery, the initial alignment provides a rough result with a larger offset than for the test data, owing to the lower quality of the point clouds themselves, but it can be further refined through the final optimization. The accuracy of the final registration is evaluated by comparison with the result obtained from manual selection of matched points. Using the method introduced here, point clouds extracted from different image groups can be combined into a more complete point cloud, or used as a complement to existing point clouds extracted from other sources. This research both improves the state of the art of 3D city modeling and may inspire new ideas in related fields.
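
    The registration stage described above comes down to estimating a similarity transform (translation, rotation and scale) between point clouds from different image groups. The thesis's own implementation is not reproduced in the abstract; the following is a minimal sketch, assuming matched 3D keypoints are already available as Nx3 arrays, of the standard closed-form (Umeyama) estimate of such a transform. The function name and the use of NumPy are illustrative, not taken from the thesis.

        import numpy as np

        def similarity_transform(src, dst):
            """Estimate scale s, rotation R and translation t so that
            dst ~= s * R @ src + t, given matched Nx3 keypoint arrays
            (closed-form Umeyama estimate)."""
            mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
            src_c, dst_c = src - mu_src, dst - mu_dst

            # 3x3 cross-covariance of the centred point sets
            cov = dst_c.T @ src_c / src.shape[0]
            U, D, Vt = np.linalg.svd(cov)

            # Reflection guard: force a proper rotation (det(R) = +1)
            S = np.eye(3)
            if np.linalg.det(U) * np.linalg.det(Vt) < 0:
                S[2, 2] = -1.0

            R = U @ S @ Vt
            var_src = (src_c ** 2).sum() / src.shape[0]
            s = np.trace(np.diag(D) @ S) / var_src
            t = mu_dst - s * R @ mu_src

            # 4x4 homogeneous transform combining scale, rotation and translation
            T = np.eye(4)
            T[:3, :3] = s * R
            T[:3, 3] = t
            return T

    In the workflow described above, an estimate of this kind would correspond to the initial alignment, which the subsequent, more accurate registration then refines.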

    Calibration of full-waveform airborne laser scanning data for 3D object segmentation

    PhD thesis. Airborne Laser Scanning (ALS) is a fully commercial technology which has seen rapid uptake by the photogrammetry and remote sensing community for classifying surface features and enhancing automatic object recognition and extraction processes. 3D object segmentation is one of the major research topics in the field of laser scanning for feature recognition and object extraction applications. The demand for automatic segmentation has increased significantly with the emergence of full-waveform (FWF) ALS, which potentially offers an unlimited number of return echoes. FWF has shown potential to improve available segmentation and classification techniques by exploiting the additional physical observables provided alongside the standard geometric information. However, use of this additional FWF information is not recommended without prior radiometric calibration that takes into consideration all the parameters affecting the backscattered energy. The main focus of this research is to calibrate the additional FWF information in order to develop the potential of the point clouds for segmentation algorithms. Echo amplitude normalisation as a function of local incidence angle was identified as a particularly critical aspect, and a novel echo amplitude normalisation approach, termed the Robust Surface Normal (RSN) method, has been developed. Following the radar equation, a comprehensive radiometric calibration routine is introduced to account for all variables affecting the backscattered laser signal. Thereafter, a segmentation algorithm is developed which uses the raw 3D point clouds to estimate the normal for individual echoes based on the RSN method. The segmentation criterion is the normal vector augmented by the calibrated backscatter signals. The developed segmentation routine aims to fully integrate FWF data to improve feature recognition and 3D object segmentation applications. The routine was tested over various feature types from two datasets with different properties to assess its potential. The results are compared to those delivered using only geometric information, without the additional FWF radiometric information, to assess performance against existing methods. The results confirmed the potential of the additional FWF observables to improve segmentation algorithms. The new approach was validated against manual segmentation results, revealing a successful automatic implementation and achieving an accuracy of 82%.
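
    The calibration described above hinges on normalising echo amplitude for range and local incidence angle before using it as a segmentation attribute. The RSN method itself is not reproduced here; the sketch below only illustrates the conventional radar-equation-style correction (range-squared and Lambertian cosine terms) that such a routine builds on, with every variable name and default value assumed for illustration.

        import numpy as np

        def normalise_amplitude(amplitude, echo_range, incidence_angle, ref_range=1000.0):
            """Simplified radiometric normalisation of FWF echo amplitudes.

            amplitude       : raw echo amplitudes (array)
            echo_range      : sensor-to-echo distance in metres (array)
            incidence_angle : angle between laser beam and surface normal, radians (array)
            ref_range       : reference range the amplitudes are scaled to

            For an extended (area) target the received power falls off with range
            squared, and a Lambertian surface returns energy proportional to the
            cosine of the incidence angle; both effects are divided out here.
            """
            amplitude = np.asarray(amplitude, dtype=float)
            range_correction = (np.asarray(echo_range) / ref_range) ** 2
            cos_term = np.cos(np.asarray(incidence_angle))
            cos_term = np.clip(cos_term, 1e-3, None)  # avoid blow-up at grazing angles
            return amplitude * range_correction / cos_term

    In the thesis, the incidence angle is derived from per-echo surface normals estimated with the RSN method, which is what distinguishes the proposed routine from this generic correction.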

    Structure from Action: Learning Interactions for Articulated Object 3D Structure Discovery

    Articulated objects are abundant in daily life. Discovering their parts, joints, and kinematics is crucial for robots to interact with these objects. We introduce Structure from Action (SfA), a framework that discovers the 3D part geometry and joint parameters of unseen articulated objects through a sequence of inferred interactions. Our key insight is that 3D interaction and perception should be considered jointly when constructing 3D articulated CAD models, especially for categories not seen during training. By selecting informative interactions, SfA discovers parts and reveals initially occluded surfaces, such as the inside of a closed drawer. By aggregating visual observations in 3D, SfA accurately segments multiple parts, reconstructs part geometry, and infers all joint parameters in a canonical coordinate frame. Our experiments demonstrate that a single SfA model trained in simulation can generalize to many unseen object categories with unknown kinematic structures, as well as to real-world objects. Code and data will be publicly available.
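
    The joint-parameter inference mentioned above can be illustrated with a small kinematic example. The sketch below is not the SfA model; it only shows, assuming the rigid motion of a part between two observed states has already been recovered, how a revolute joint's angle, axis and a point on that axis follow from that relative transform. All names are hypothetical.

        import numpy as np

        def revolute_joint_from_motion(R, t):
            """Given the relative rigid motion x' = R @ x + t of a part between two
            observed states, recover the rotation angle, the unit joint axis, and a
            point on the axis, assuming the motion is a pure rotation about a fixed
            axis (i.e. a revolute joint) and the angle is not close to 0 or pi."""
            angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))

            # Rotation axis from the skew-symmetric part of R
            axis = np.array([R[2, 1] - R[1, 2],
                             R[0, 2] - R[2, 0],
                             R[1, 0] - R[0, 1]])
            axis /= np.linalg.norm(axis)

            # A rotation about an axis through point p gives t = (I - R) @ p, so any
            # least-squares solution of that singular system lies on the joint axis.
            p, *_ = np.linalg.lstsq(np.eye(3) - R, t, rcond=None)
            return angle, axis, p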

    Marine Heritage Monitoring with High Resolution Survey Tools: ScapaMAP 2001-2006

    Archaeologically, marine sites can be just as significant as those on land. Until recently, however, they were not protected in the UK to the same degree, leading to degradation of sites, and the difficulty of investigating such sites still makes it problematic and expensive to properly describe, schedule and monitor them. Use of conventional high-resolution survey tools in an archaeological context is changing the economic structure of such investigations, however, and it is now possible to monitor the state of submerged cultural artifacts remotely and routinely. Using such data to optimize the expenditure of expensive and scarce assets (e.g., divers and on-bottom dive time) is an added bonus. We present here the results of an investigation into methods for monitoring marine heritage sites, using the remains of the Imperial German Navy (scuttled 1919) in Scapa Flow, Orkney as a case study. Using a baseline bathymetric survey in 2001 and a repeat bathymetric and volumetric survey in 2006, we illustrate the requirements for such surveys over and above normal hydrographic protocols and outline strategies for effective imaging of large wrecks. Suggested methods for manipulating such data (including processing and visualization) are outlined, and we draw a distinction between products for scientific investigation and those for outreach and education, which have very different requirements. We then describe the use of backscatter and volumetric acoustic data in the investigation of wrecks, focusing on the extra information they provide that is not evident in traditional bathymetric DTM models or sounding point-cloud representations of the data. Finally, we consider the utility of high-resolution survey as part of an integrated site management policy, with particular reference to the economics of marine heritage monitoring and preservation.
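
    The baseline and repeat surveys mentioned above are compared, in essence, by differencing two gridded bathymetric surfaces. No code accompanies the paper; the following is a minimal sketch, assuming the 2001 and 2006 surveys have already been gridded onto common cells, of how a difference surface, the area of significant change and a net volume change might be computed. The names, the detection threshold and the use of NumPy are illustrative.

        import numpy as np

        def bathymetric_change(dtm_2001, dtm_2006, cell_size, threshold=0.25):
            """Difference two co-registered bathymetric grids (metres, NaN = no data)
            and report the area and net volume of change exceeding a detection
            threshold, taken here as the combined vertical uncertainty of the two
            surveys, below which change is treated as noise."""
            diff = dtm_2006 - dtm_2001                      # positive = shoaling
            valid = ~np.isnan(diff)
            significant = valid & (np.abs(diff) > threshold)

            cell_area = cell_size ** 2
            net_volume = np.nansum(np.where(significant, diff, 0.0)) * cell_area
            changed_area = significant.sum() * cell_area
            return diff, changed_area, net_volume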

    Perception de la géométrie de l'environnement pour la navigation autonome

    The goal of mobile robotics research is to give robots the capability to accomplish missions in an environment that is not perfectly known. A mission consists of executing a set of elementary actions (movement, manipulation of objects, ...) and requires accurate localisation of the robot as well as the construction of a good geometric model of the environment, built from the robot's own sensors, from external sensors, from information provided by other robots, and from existing models such as a Geographic Information System. The common element of all this information is the geometry of the environment. The first part of the thesis covers the different methods for extracting geometric information. The second part presents the construction of a geometric model as a graph, along with a method for retrieving information from the graph so that the robot can localise itself in the environment.

    CHORUS Deliverable 2.2: Second report - identification of multi-disciplinary key issues for gap analysis toward EU multimedia search engines roadmap

    After addressing the state of the art during the first year of CHORUS and establishing the existing landscape of multimedia search engines, we identified and analyzed gaps in the European research effort during our second year. In this period we focused on three directions, namely technological issues, user-centred issues and use-cases, and socio-economic and legal aspects. These were assessed through two central studies: firstly, a concerted vision of the functional breakdown of a generic multimedia search engine, and secondly, representative use-case descriptions with a related discussion of the requirements they pose for technological challenges. Both studies were carried out in cooperation and consultation with the community at large through EC concertation meetings (multimedia search engines cluster), several meetings with our Think-Tank, presentations at international conferences, and surveys addressed to EU project coordinators as well as coordinators of national initiatives. Based on the feedback obtained, we identified two types of gaps: core technological gaps that involve research challenges, and “enablers”, which are not necessarily technical research challenges but have an impact on innovation progress. New socio-economic trends are presented, as well as emerging legal challenges.

    An investigation into semi-automated 3D city modelling

    Creating three-dimensional digital representations of urban areas, also known as 3D city modelling, is essential in many applications, such as urban planning, radio-frequency signal propagation, flight simulation and vehicle navigation, which are of increasing importance in modern urban centres. The main aim of the thesis is the development of a semi-automated, innovative workflow for creating 3D city models using aerial photographs and LiDAR data collected from various airborne sensors. The complexity of this aim necessitates the development of an efficient and reliable way to progress from manually intensive operations to an increased level of automation. The proposed methodology exploits the combination of different datasets, also known as data fusion, to achieve reliable results in different study areas. Data fusion techniques are used to combine linear features extracted from aerial photographs with either LiDAR data or any other available source, including Very Dense Digital Surface Models (VDDSMs). The research proposes a semi-automated technique for 3D city modelling that fuses LiDAR, where available, or VDDSMs with 3D linear features extracted from stereo pairs of photographs. Building detection and generation of the building footprint are performed with a plane-fitting algorithm applied to the LiDAR data or VDDSMs, using conditions based on the slope of the roofs and the minimum size of the buildings. The initial building footprint is subsequently generalized using a simplification algorithm that enforces orthogonality between the individual linear segments within a defined tolerance. The final refinement of the building outline is performed for each linear segment using the filtered stereo-matched points with a least squares estimation. The digital reconstruction of the roof shapes is performed by applying a least-squares plane-fitting algorithm to the classified VDDSMs, constrained by the building outlines, the minimum size of the planes and the maximum height tolerance between adjacent 3D points. Subsequently, neighbouring planes are merged using Boolean operations to generate solid features. The results show very detailed building models; various roof details such as dormers and chimneys are successfully reconstructed in most cases.
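
    Several of the steps above (building detection and roof reconstruction) rest on least-squares plane fitting to subsets of LiDAR or VDDSM points. The thesis's implementation is not given in the abstract; as a minimal sketch under that assumption, a total-least-squares plane can be fitted to a patch of 3D points via SVD, with the residual and slope then tested against thresholds of the kind the workflow describes. All names and threshold values are illustrative.

        import numpy as np

        def fit_plane(points):
            """Total-least-squares plane fit to an Nx3 array of points.
            Returns (centroid, unit normal, RMS orthogonal residual)."""
            centroid = points.mean(axis=0)
            # Smallest right singular vector of the centred points is the plane normal
            _, _, Vt = np.linalg.svd(points - centroid, full_matrices=False)
            normal = Vt[-1]
            residuals = (points - centroid) @ normal
            rms = np.sqrt(np.mean(residuals ** 2))
            return centroid, normal, rms

        # Example thresholds of the kind described in the workflow (values assumed):
        # keep a candidate roof patch only if it is planar enough and not too steep.
        def is_roof_patch(points, max_rms=0.15, max_slope_deg=60.0):
            centroid, normal, rms = fit_plane(points)
            slope = np.degrees(np.arccos(abs(normal[2])))  # tilt from horizontal
            return rms < max_rms and slope < max_slope_deg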

    Segmentation and Deformable Modelling Techniques for a Virtual Reality Surgical Simulator in Hepatic Oncology

    Liver surgical resection is one of the most frequently used curative therapies. However, resectability is problematic. There is a need for a computer-assisted surgical planning and simulation system that can accurately and efficiently simulate the liver, vessels and tumours in actual patients. The present project describes the development of the core segmentation and deformable modelling techniques for such a system. For precise detection of irregularly shaped areas with indistinct boundaries, the segmentation incorporated active contours: gradient vector flow (GVF) snakes and level sets. To improve efficiency, a chessboard distance transform was used to replace part of the GVF computation. To automatically initialize the liver volume detection process, a rotating template was introduced to locate the starting slice. To maintain shape during the segmentation process, a simplified object-shape learning step was introduced to avoid occasional significant errors. Skeletonization with fuzzy connectedness was used for vessel segmentation. To achieve real-time interactivity, the deformation regime of this system was based on a single-organ mass-spring system (MSS), which introduced on-the-fly local mesh refinement to improve deformation accuracy and mesh control quality. This method was then extended to a multiple soft-tissue constraint system by supplementing it with adaptive constraint mesh generation. A mesh quality measure was tailored based on a wide comparison of classic measures. Adjustable feature and parameter settings were thus provided to make tissues of interest distinct from adjacent structures, while keeping the mesh suitable for on-line topological transformation and deformation. More than 20 actual-patient CT datasets and 2 magnetic resonance imaging (MRI) liver datasets were used to evaluate the performance of the segmentation method. Instrument manipulations of probing, grasping and simple cutting were successfully simulated on deformable constraint liver tissue models. The project was implemented in conjunction with the Division of Surgery, Hammersmith Hospital, London; the preliminary realism of the simulation was judged satisfactory by the consultant hepatic surgeon.
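
    The deformation regime described above is built on a mass-spring system (MSS). The simulator's constraint handling and local mesh refinement are not reproduced here; the sketch below is only a generic, minimal mass-spring time step (Hooke springs with damping, semi-implicit Euler integration) of the kind such a regime advances each frame, with every parameter value assumed for illustration.

        import numpy as np

        def mss_step(pos, vel, springs, rest_len, k=500.0, damping=2.0,
                     mass=0.01, dt=1e-3, gravity=(0.0, 0.0, -9.81)):
            """One semi-implicit Euler step of a simple mass-spring system.

            pos, vel : (N, 3) node positions and velocities
            springs  : (M, 2) integer index pairs of connected nodes
            rest_len : (M,) rest lengths of those springs
            """
            force = np.tile(np.asarray(gravity) * mass, (len(pos), 1))

            i, j = springs[:, 0], springs[:, 1]
            delta = pos[j] - pos[i]
            length = np.linalg.norm(delta, axis=1, keepdims=True)
            direction = delta / np.maximum(length, 1e-9)

            # Hooke spring force plus damping along the spring direction
            rel_vel = np.sum((vel[j] - vel[i]) * direction, axis=1, keepdims=True)
            f = (k * (length - rest_len[:, None]) + damping * rel_vel) * direction

            np.add.at(force, i,  f)
            np.add.at(force, j, -f)

            vel = vel + dt * force / mass
            pos = pos + dt * vel
            return pos, vel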

    Share - Publish - Store - Preserve. Methodologies, Tools and Challenges for 3D Use in Social Sciences and Humanities

    Through this White Paper, which gathers contributions from experts in 3D data as well as professionals concerned with the interoperability and sustainability of 3D research data, the PARTHENOS project aims to highlight some of the current issues they face, including points specific to particular disciplines, and potential practices and methodologies for dealing with these issues. During the workshop, several tools for addressing these issues were introduced and set against the participants' experiences; this White Paper now intends to go further by also integrating the participants' feedback and suggestions for potential improvements. Therefore, even though the focus is on specific tools, the main goal is to contribute to the development of standardized good practices for the sharing, publication, storage and long-term preservation of 3D data.