Accelerating point cloud cleaning
Capturing the geometry of a large heritage site via laser scanning can produce thousands of high-resolution range scans. These must be cleaned to remove unwanted artefacts. We identified three areas that can be improved upon in order to accelerate the cleaning process. Firstly, the speed at which a user can navigate to an area of interest has a direct impact on task duration. Secondly, design constraints in generalised point cloud editing software result in an inefficient abstraction of layers that may extend task duration due to memory pressure. Finally, existing semi-automated segmentation tools have difficulty targeting the diverse set of segmentation targets in heritage scans. We present a point cloud cleaning framework that attempts to improve each of these areas. First, we present a novel layering technique aimed at segmentation, rather than generic point cloud editing. This technique represents 'layers' of related points in a way that greatly reduces memory consumption and provides efficient set operations between layers. These set operations (union, difference, intersection) allow the creation of new layers which aid in the segmentation task. Next, we introduce roll-corrected 3D camera navigation that allows a user to look around freely while reducing disorientation. A user study shows that this camera mode significantly reduces a user's navigation time (by 29.8% to 57.8%) between locations in a large point cloud, thus reducing the overhead between point selection operations. Finally, we show how Random Forests can be trained interactively, per scan, to assist users in a point cloud cleaning task. We use a set of features selected for their discriminative power on a set of challenging heritage scans. Interactivity is achieved by down-sampling training data on the fly. A simple map data structure allows us to propagate labels in the down-sampled data back to the input point set.
We show that training and classification on down-sampled point clouds can be performed in under 10 seconds with little effect on accuracy. A user study shows that a user's total segmentation time decreases by between 8.9% and 20.4% when our Random Forest classifier is used. Although this initial study did not indicate a significant difference in overall task performance when compared to manual segmentation, performance improvement is likely with multi-resolution features or the use of colour range images, which are now commonplace.
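The layering idea described in the abstract can be illustrated with a minimal sketch. The class and field names below are hypothetical, not the authors' implementation: each layer is stored as a set of indices into one shared point buffer, so a layer costs memory proportional to its own size rather than a full point-cloud copy, and the union/difference/intersection operations from the abstract become plain set operations.

```python
# Illustrative sketch (names hypothetical): a segmentation 'layer' as a set
# of point indices into a shared cloud, with the set operations (union,
# difference, intersection) used to build new layers.
class Layer:
    """A named subset of points, stored as indices into a shared cloud."""
    def __init__(self, name, indices):
        self.name = name
        self.indices = frozenset(indices)

    def union(self, other):
        return Layer(f"{self.name}|{other.name}", self.indices | other.indices)

    def difference(self, other):
        return Layer(f"{self.name}-{other.name}", self.indices - other.indices)

    def intersection(self, other):
        return Layer(f"{self.name}&{other.name}", self.indices & other.indices)

# Example: carve wall points out of a rough user selection.
rough = Layer("rough", range(0, 1000))       # user's rough selection
vegetation = Layer("veg", range(600, 1400))  # e.g. classifier output
wall = rough.difference(vegetation)          # rough minus vegetation
```

Because no point coordinates are duplicated, creating a new layer from two existing ones touches only index sets, which is what makes these operations cheap even on large scans.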
Accelerating Point Cloud Cleaning
A laser scanning campaign to capture the geometry of a large heritage site can produce thousands of high-resolution range scans. These must be cleaned to remove noise and artefacts. To accelerate the cleaning task, we can i) reduce the time required for batch-processing tasks, ii) reduce user interaction time, or iii) replace interactive tasks with more efficient automated algorithms. We present a point cloud cleaning framework that attempts to improve each of these aspects. First, we present a novel system architecture targeted at point cloud segmentation. This architecture represents ‘layers’ of related points in a way that greatly reduces memory consumption and provides efficient set operations between layers. These set operations (union, difference, intersection) allow the creation of new layers which aid in the segmentation task. Next, we introduce roll-corrected 3D camera navigation that allows a user to look around freely while reducing disorientation. A user study showed that this camera mode significantly reduces a user's navigation time between locations in a large point cloud, thus accelerating point selection operations. Finally, we show how boosted random forests can be trained interactively, per scan, to assist users in a point cloud cleaning task. To achieve interactivity, we sub-sample the training data on the fly and use efficient features adapted to the properties of range scans. Training and classification required 8-9 s for point clouds of up to 11 million points. Tests showed that a simple user-selected seed allowed walls to be recovered from tree and bush overgrowth with up to 92% accuracy (F-score). A preliminary user study showed that overall task time performance was improved; with 19 users, however, the study could not confirm this result as statistically significant.
These results are, however, promising and suggest that even larger performance improvements are likely with more sophisticated features or the use of colour range images, which are now commonplace.
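The sub-sample-and-propagate step mentioned in both abstracts can be sketched as follows. This is an assumed voxel-grid formulation, not the authors' code: the cloud is down-sampled to one representative point per voxel, the classifier labels only the representatives, and a simple map from every original point to its voxel key propagates labels back to the full-resolution cloud.

```python
# Hypothetical sketch of down-sampling with a label-propagation map:
# classify only one representative per voxel, then copy each
# representative's label to every original point in that voxel.
def voxel_key(p, size):
    """Integer grid cell containing point p = (x, y, z)."""
    return (int(p[0] // size), int(p[1] // size), int(p[2] // size))

def downsample(points, size):
    """Return (representatives, per-point voxel keys) for a voxel grid."""
    reps, key_of_point = {}, []
    for p in points:
        k = voxel_key(p, size)
        reps.setdefault(k, p)        # first point seen becomes the rep
        key_of_point.append(k)
    return reps, key_of_point

def propagate(rep_labels, key_of_point):
    """Map each full-resolution point to its representative's label."""
    return [rep_labels[k] for k in key_of_point]

# Tiny example with a stand-in classifier (a threshold on height z).
points = [(0.1, 0.1, 0.2), (0.2, 0.1, 0.3), (5.0, 5.0, 9.0)]
reps, key_of_point = downsample(points, size=1.0)
rep_labels = {k: ("keep" if p[2] < 5 else "artefact") for k, p in reps.items()}
labels = propagate(rep_labels, key_of_point)
```

The cost of training and classification then scales with the number of occupied voxels rather than the raw point count, which is consistent with the reported 8-9 s timings on multi-million-point scans.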
Integrating data from 3D CAD and 3D cameras for Real-Time Modeling
In a reversal of historic trends, the capital facilities industry is expressing an increasing desire for automation of equipment and construction processes. Simultaneously, the industry has become conscious that higher levels of interoperability are key to higher productivity and safer projects. In complex, dynamic, and rapidly changing three-dimensional (3D) environments such as facility sites, cutting-edge 3D sensing technologies and processing algorithms are one area of development that can dramatically impact these project factors. New 3D technologies are now being developed, among them the 3D camera. The main focus here is an investigation of the feasibility of rapidly combining and comparing (integrating) 3D sensed data (from a 3D camera) and 3D CAD data. Such a capability could improve construction quality assessment and facility aging assessment, as well as rapid environment reconstruction and construction automation. Some preliminary results are presented here. They deal with the challenge of fusing sensed and CAD data that are completely different in nature.
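One elementary way to compare sensed points against CAD geometry, of the kind such a quality-assessment capability would need, is a point-to-plane deviation check. The sketch below is an illustration under assumed inputs (a planar CAD face given by an origin and unit normal, and a tolerance), not the method of the paper:

```python
# Minimal sketch (not from the paper): flag sensed points that deviate from
# a planar CAD face by more than a construction tolerance.
def point_to_plane(p, origin, normal):
    """Signed distance from point p to the plane (origin, unit normal)."""
    return sum((p[i] - origin[i]) * normal[i] for i in range(3))

def deviations(points, origin, normal, tol):
    """Indices of sensed points farther than `tol` from the CAD face."""
    return [i for i, p in enumerate(points)
            if abs(point_to_plane(p, origin, normal)) > tol]

# Example: a wall modelled as the plane z = 0, checked to a 5 mm tolerance.
scan = [(0.1, 0.2, 0.002), (0.5, 0.1, 0.020), (0.9, 0.4, -0.001)]
bad = deviations(scan, origin=(0, 0, 0), normal=(0, 0, 1), tol=0.005)
```

Real CAD models are of course assemblies of many faces, so a practical system would first associate each sensed point with its nearest model surface before measuring deviation.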
Atom chip for BEC interferometry
We have fabricated and tested an atom chip that operates as a matter wave interferometer. In this communication we describe the fabrication of the chip by ion-beam milling of gold evaporated onto a silicon substrate. We present data on the quality of the wires, on the current density that can be reached in the wires, and on the smoothness of the magnetic traps that are formed. We demonstrate the operation of the interferometer, showing that we can coherently split and recombine a Bose–Einstein condensate with good phase stability.
Automatic Detection of Clear-Sky Periods From Irradiance Data
Recent degradation studies have highlighted the importance of considering cloud cover when calculating degradation rates, finding more reliable values when the data are restricted to clear-sky periods. Several automated methods of determining clear-sky periods have been previously developed, but parameterizing and testing the models has been difficult. In this paper, we use clear-sky classifications determined from satellite data to develop an algorithm that determines clear-sky periods using only measured irradiance values and modeled clear-sky irradiance as inputs. This method is tested on global horizontal irradiance (GHI) data from ground collectors at six sites across the United States and compared against independent satellite-based classifications. First, 30 separate models were optimized, one for each individual site at GHI data intervals of 1, 5, 10, 15, and 30 min (sampled on the first minute of the interval). The models had an average F0.5 score of 0.949 ± 0.035 on a holdout test set. Next, optimizations were performed by aggregating data from different locations at the same interval, yielding one model per data interval. This yielded an average F0.5 of 0.946 ± 0.037. A final, 'universal' optimization that was trained on data from all sites at all intervals provided an F0.5 score of 0.943 ± 0.040. The optimizations all provide improvements on a prior, unoptimized clear-sky detection algorithm that produces F0.5 scores averaging 0.903 ± 0.067. Our results indicate that a single algorithm can accurately classify clear-sky periods across locations and data sampling intervals.
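The core idea, classifying a timestamp as clear when measured GHI tracks the modeled clear-sky GHI, and scoring the classifier with F0.5, can be sketched as follows. The single ratio criterion and its tolerance are simplifications for illustration, not the paper's optimized multi-criterion parameters:

```python
# Illustrative sketch (threshold hypothetical): flag a timestamp as clear
# when measured GHI is close to the clear-sky model, then score against a
# reference classification with F0.5 (weights precision over recall).
def classify_clear(measured, modeled, ratio_tol=0.1):
    """Clear if measured GHI is within +/-ratio_tol of the clear-sky model."""
    return [m > 0 and abs(g / m - 1.0) <= ratio_tol
            for g, m in zip(measured, modeled)]

def f_beta(pred, truth, beta=0.5):
    """F-beta score; beta = 0.5 emphasizes precision, as in the paper."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(t and not p for p, t in zip(pred, truth))
    b2 = beta * beta
    return (1 + b2) * tp / ((1 + b2) * tp + b2 * fn + fp)

measured = [480, 500, 250, 510]   # W/m^2; the dip at index 2 is a cloud
modeled  = [500, 505, 500, 512]   # clear-sky model output
pred = classify_clear(measured, modeled)
```

Published detection schemes typically combine several such criteria (mean level, variability, line length) over a sliding window rather than a single per-sample ratio, which is where the paper's per-site and universal optimizations come in.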