
    LiDARgrammetry: A New Method for Generating Synthetic Stereoscopic Products from Digital Elevation Models

    There are currently several new technologies for generating digital elevation models that do not rely on photogrammetric techniques. For example, LiDAR (Laser Imaging Detection and Ranging) and RADAR (RAdio Detection And Ranging) can generate 3D points and reflectivity information of the surface without using a photogrammetric approach. In the case of LiDAR, the intensity level indicates the amount of energy that the object reflects after a laser pulse is transmitted. This energy depends mainly on the material and on the wavelength used by the LiDAR. The intensity level can be used to generate a synthetic image colored by this attribute, which can be viewed as an RGB (red, green and blue) picture. This work presents the outline of an innovative method, designed by the authors, to generate synthetic pictures from point clouds for use in classical photogrammetric software (digital restitution or stereoscopic vision). This is done using available additional information (for example, the LiDAR intensity level). It allows mapping operators to view the LiDAR data as if it were stereo imagery, so they can manually digitize points, 3D lines, break lines, polygons and so on.
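
    As a minimal sketch of the intensity-to-image idea (not the authors' actual stereo-product pipeline), the point cloud can be rasterised orthographically and colored by intensity; the 0.5 m cell size and the NumPy layout below are illustrative assumptions.

```python
# Minimal sketch: rasterise a LiDAR point cloud into a synthetic grey-level
# image coloured by intensity via a simple orthographic projection onto the
# XY plane. The 0.5 m cell size is an illustrative assumption.
import numpy as np

def intensity_image(points, intensities, cell=0.5):
    """points: (N, 3) array of x, y, z; intensities: (N,) raw LiDAR returns."""
    x, y = points[:, 0], points[:, 1]
    cols = ((x - x.min()) / cell).astype(int)
    rows = ((y.max() - y) / cell).astype(int)          # image row 0 at the top
    img = np.zeros((rows.max() + 1, cols.max() + 1), dtype=np.uint8)
    # Normalise intensity to 8-bit grey levels; coincident points keep the last value.
    norm = np.clip((intensities - intensities.min()) /
                   (np.ptp(intensities) + 1e-9) * 255, 0, 255).astype(np.uint8)
    img[rows, cols] = norm
    return img
```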

    Range 7 Scanner Integration with PaR Robot Scanning System

    An interface bracket and coordinate transformation matrices were designed to allow the Range 7 scanner to be mounted on the PaR Robot detector arm for scanning the heat shield or other objects placed in the test cell. A process was designed for using Rapid Form XOR to stitch data from multiple scans together to provide an accurate 3D model of the object scanned. An accurate model was required for the design and verification of an existing heat shield. The large physical size and complex shape of the heat shield do not allow for direct measurement of certain features in relation to other features. Any imaging device capable of capturing the heat shield in its entirety suffers reduced resolution and cannot image sections that are blocked from view. Prior methods involved tools such as commercial measurement arms, or taking images with cameras and then performing manual measurements; these methods were tedious, could not provide a 3D model of the object being scanned, and were typically limited to a few tens of measurement points at prominent locations. Integration of the scanner with the robot allows large complex objects to be scanned at high resolution, 3D Computer Aided Design (CAD) models to be generated for verification of items against the original design, and models of previously undocumented items to be produced. The main components are the mounting bracket for attaching the scanner to the robot and the coordinate transformation matrices used for stitching the scanner data into a 3D model. The steps involve mounting the interface bracket to the robot's detector arm, mounting the scanner to the bracket, and then scanning sections of the object while recording the location of the tool tip (in this case the center of the scanner's focal point). A novel feature is the ability to stitch images together by coordinates instead of requiring each scan data set to have overlapping identifiable features. This setup allows models of complex objects to be developed even if the object is large and featureless, or has sections without visibility to other parts of the object for use as a reference. In addition, millions of points can be used to create an accurate model [i.e. within 0.03 in. (≈0.8 mm) over a span of 250 in. (≈6,350 mm)].
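
    A minimal sketch of the coordinate-based stitching step, assuming each scan is delivered together with the recorded tool-tip pose (rotation R, translation t) of the scanner in the robot's base frame; the function names are illustrative and not the actual Rapid Form XOR workflow.

```python
# Minimal sketch: merge scans into one frame using recorded tool-tip poses,
# so no overlapping identifiable features are needed between scans.
import numpy as np

def to_homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def stitch(scans):
    """scans: list of (points_Nx3, R_3x3, t_3) tuples, points in scanner coordinates."""
    merged = []
    for pts, R, t in scans:
        T = to_homogeneous(R, t)
        homog = np.c_[pts, np.ones(len(pts))]       # N x 4 homogeneous points
        merged.append((homog @ T.T)[:, :3])         # mapped into the robot base frame
    return np.vstack(merged)
```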

    On the Use of UAVs in Mining and Archaeology - Geo-Accurate 3D Reconstructions Using Various Platforms and Terrestrial Views

    During the last decades, photogrammetric computer vision systems have become well established in scientific and commercial applications. In particular, the increasing affordability of unmanned aerial vehicles (UAVs), in conjunction with automated multi-view processing pipelines, has made it easy to acquire spatial data and create realistic and accurate 3D models. Multicopter UAVs, which can navigate slowly, hover and capture images at nearly any position, make it possible to record highly overlapping images from almost terrestrial camera positions through to oblique and nadir aerial views. Multicopter UAVs thus bridge the gap between terrestrial and traditional aerial image acquisition and are therefore ideally suited to easy and safe data collection and inspection tasks in complex or hazardous environments. In this paper we present a fully automated processing pipeline for precise, metric and geo-accurate 3D reconstructions of complex geometries using various imaging platforms. Our workflow allows georeferencing of UAV imagery based on GPS measurements of camera stations from an on-board GPS receiver as well as tie and control point information. Ground control points (GCPs) are integrated directly into the bundle adjustment to refine the georegistration and correct for systematic distortions of the image block. We discuss our approach based on three case studies for applications in mining and archaeology and present several accuracy-related analyses investigating georegistration, camera network configuration and ground sampling distance. Our approach is furthermore suited to seamlessly matching and integrating images from different viewpoints and cameras (aerial and terrestrial as well as inside views) into one single reconstruction. Together with aerial images from a UAV, we are able to enrich 3D models by combining terrestrial images as well as inside views of an object through joint image processing, generating highly detailed, accurate and complete reconstructions.
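
    As a minimal sketch of the georeferencing step only (the paper's GCP-aware bundle adjustment and distortion correction are not reproduced here), one could fit a similarity transform between reconstructed camera centres and their on-board GPS positions; the Umeyama-style closed form below is an assumption about how such a fit might be done, not the paper's implementation.

```python
# Minimal sketch: estimate scale s, rotation R and translation t that map
# reconstructed camera centres (model frame) onto GPS positions (world frame).
import numpy as np

def similarity_transform(src, dst):
    """src, dst: (N, 3) corresponding points, e.g. model camera centres vs GPS."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))    # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:                   # guard against reflections
        D[2, 2] = -1
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(0).sum()   # isotropic scale
    t = mu_d - s * R @ mu_s
    return s, R, t                                  # world = s * R @ model + t
```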

    What is Holding Back Convnets for Detection?

    Convolutional neural networks have recently shown excellent results in general object detection and many other tasks. Albeit very effective, they involve many user-defined design choices. In this paper we aim to better understand these choices by inspecting two key questions: "what did the network learn?" and "what can the network learn?". We exploit new annotations (Pascal3D+) to enable a new empirical analysis of the R-CNN detector. Contrary to common belief, our results indicate that existing state-of-the-art convnet architectures are not invariant to various appearance factors. In fact, all considered networks have similar weak points which cannot be mitigated by simply increasing the training data (architectural changes are needed). We show that overall performance can improve when using image renderings for data augmentation. We report the best known results on the Pascal3D+ detection and viewpoint estimation tasks.
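
    A minimal sketch of rendering-based data augmentation in the spirit of the paper, assuming a simple fixed mixing ratio between real training images and synthetic renderings; the 0.3 ratio and the sample lists are illustrative, not values from the paper.

```python
# Minimal sketch: interleave synthetic renderings with real training samples
# at a fixed probability when building the training stream.
import random

def mixed_training_stream(real_samples, rendered_samples, render_ratio=0.3, seed=0):
    """Yield (image_path, label) pairs, drawing a rendering with probability render_ratio."""
    rng = random.Random(seed)
    while True:
        pool = rendered_samples if rng.random() < render_ratio else real_samples
        yield rng.choice(pool)
```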

    Experiences modelling and using object-oriented telecommunication service frameworks in SDL

    This paper describes experiences in using SDL and its associated tools to create telecommunication services by producing and specialising object-oriented frameworks. The chosen approach recognises the need for the rapid creation of validated telecommunication services. It introduces two stages to service creation: first, a software expert produces a service framework; second, a telecommunications ‘business consultant’ specialises the framework by means of graphical tools to rapidly produce services. Here the focus is on the underlying technology required. In particular, the advantages and disadvantages of SDL and its tools for this purpose are highlighted.
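
    The paper works in SDL, but the two-stage split it describes can be illustrated in Python: a software expert writes the service framework with a fixed call-handling flow, and a specialisation supplies the hook implementations (the class and method names below are hypothetical, not taken from the paper).

```python
# Illustrative sketch only: a framework fixes the call-handling flow; a
# specialisation overrides the hooks, mirroring the expert/consultant split.
class CallServiceFramework:
    def handle_call(self, caller, callee):
        if not self.screen(caller, callee):              # specialisation point 1
            return "rejected"
        return f"connected {caller} -> {self.route(callee)}"  # specialisation point 2

    def screen(self, caller, callee):
        return True                                      # default: accept every call

    def route(self, callee):
        return callee                                    # default: direct routing

class CallForwardingService(CallServiceFramework):
    """A specialised service: reroute calls according to a forwarding table."""
    def __init__(self, forwarding_table):
        self.forwarding_table = forwarding_table

    def route(self, callee):
        return self.forwarding_table.get(callee, callee)
```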