
    Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks

    © The Author(s), 2020. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Hopkinson, B. M., King, A. C., Owen, D. P., Johnson-Roberson, M., Long, M. H., & Bhandarkar, S. M. Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks. PLoS One, 15(3), (2020): e0230671, doi: 10.1371/journal.pone.0230671. Coral reefs are biologically diverse and structurally complex ecosystems, which have been severely affected by human actions. Consequently, there is a need for rapid ecological assessment of coral reefs, but current approaches require time-consuming manual analysis, either during a dive survey or on images collected during a survey. Reef structural complexity is essential for ecological function but is challenging to measure and is often relegated to simple metrics such as rugosity. Recent advances in computer vision and machine learning offer the potential to alleviate some of these limitations. We developed an approach to automatically classify 3D reconstructions of reef sections and assessed its accuracy. 3D reconstructions of reef sections were generated using commercial Structure-from-Motion software with images extracted from video surveys. To generate a 3D classified map, locations on the 3D reconstruction were mapped back into the original images to extract multiple views of each location. Several approaches were tested to merge information from multiple views of a point into a single classification; all used convolutional neural networks to classify or extract features from the images but differed in the strategy employed for merging information. The merging strategies were voting, probability averaging, and a learned neural-network layer. All approaches performed similarly, achieving overall classification accuracies of ~96% and >90% accuracy on most classes. With this high classification accuracy, these approaches are suitable for many ecological applications. This study was funded by grants from the Alfred P. Sloan Foundation (BMH, BR2014-049; https://sloan.org) and the National Science Foundation (MHL, OCE-1657727; https://www.nsf.gov). The funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
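    The view-merging step can be illustrated with a minimal sketch. The snippet below assumes each CNN view of a reef point yields a softmax probability vector; the function names, class count, and example values are hypothetical, and the learned merging layer described in the abstract is not shown.

import numpy as np

def merge_views_by_voting(view_probs):
    """Majority vote over per-view argmax predictions.

    view_probs: (n_views, n_classes) array of per-view softmax outputs
    for the same 3D point.
    """
    votes = np.argmax(view_probs, axis=1)
    return np.bincount(votes, minlength=view_probs.shape[1]).argmax()

def merge_views_by_averaging(view_probs):
    """Average class probabilities across views, then take the argmax."""
    return np.argmax(view_probs.mean(axis=0))

# Example: three views of one point, four hypothetical benthic classes
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.4, 0.4, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.1]])
print(merge_views_by_voting(probs), merge_views_by_averaging(probs))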

    Progress in industrial photogrammetry by means of markerless solutions

    174 p. The following thesis focuses on the development and advanced use of markerless photogrammetric methodologies in industrial applications. Photogrammetry is a 3D optical measurement technique that encompasses multiple configurations and approaches. In this study, measurement procedures, models, and image-processing strategies have been developed that go beyond conventional photogrammetry and seek to apply solutions from other fields of computer vision to industrial applications. Whereas industrial photogrammetry requires artificial targets to define the points or elements of interest, this thesis considers the reduction and even the elimination of both passive and active targets as practical alternatives. Most measurement systems use targets to define control points, relate the different perspectives, achieve accuracy, and automate the measurements. Although in many situations the use of targets is not restrictive, there are industrial applications where their use considerably conditions and constrains the measurement procedures employed in the inspection. Clear examples are the verification and quality control of serial parts, or the measurement and tracking of prismatic elements relative to a given reference system. It is at this point where markerless photogrammetry can be combined with or complement traditional solutions in an attempt to improve current performance.
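    As a rough illustration of the markerless idea, the sketch below matches natural surface features between two views with OpenCV's ORB detector instead of coded targets; the image file names are placeholders and ORB is only one possible detector, not necessarily the one used in the thesis.

import cv2

# Load two overlapping views of the part (placeholder file names).
img1 = cv2.imread("view_a.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view_b.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe natural surface features instead of artificial targets.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Cross-checked brute-force matching gives candidate correspondences that
# could feed relative orientation or bundle adjustment in place of targets.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences")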

    Automatic Real-Time Pose Estimation of Machinery from Images

    The automatic positioning of machines in a large number of application areas is an important aspect of automation. Today, this is often done using classic geodetic sensors such as Global Navigation Satellite Systems (GNSS) and robotic total stations. In this work, a stereo camera system was developed that localizes a machine at high frequency and serves as an alternative to the previously mentioned sensors. For this purpose, algorithms were developed that detect active markers on the machine in a stereo image pair, find stereo point correspondences, and estimate the pose of the machine from these. Theoretical influences and accuracies for different systems were estimated with a Monte Carlo simulation, on the basis of which the stereo camera system was designed. Field measurements were used to evaluate the actually achievable accuracies and the robustness of the prototype system, and the results were compared with reference measurements from a laser tracker. The estimated object pose achieved accuracies higher than [Formula: see text] for the translation components and higher than [Formula: see text] for the rotation components. As a result, 3D point accuracies higher than [Formula: see text] were achieved for the machine. For the first time, a prototype could be developed that provides a powerful image-based localization method for machines as an alternative to the classical geodetic sensors.
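    The final pose-estimation step can be sketched as a rigid-body fit between the known marker coordinates in the machine frame and their stereo-triangulated positions. The function below is a generic Kabsch/SVD solution under that assumption, not the paper's specific implementation.

import numpy as np

def estimate_rigid_pose(model_pts, measured_pts):
    """Least-squares rotation R and translation t mapping marker coordinates
    in the machine frame (model_pts, N x 3) onto stereo-triangulated points
    (measured_pts, N x 3), via the Kabsch/SVD method."""
    cm, cs = model_pts.mean(axis=0), measured_pts.mean(axis=0)
    H = (model_pts - cm).T @ (measured_pts - cs)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T  # reflection-safe rotation
    t = cs - R @ cm
    return R, t

    The recovered R and t would give the machine pose relative to the camera frame at each stereo epoch.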

    Motorcycles that see: Multifocal stereo vision sensor for advanced safety systems in tilting vehicles

    Advanced driver assistance systems (ADAS) have shown the potential to anticipate crashes and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations for which automotive remote sensors installed in a tilting vehicle are likely to fail in the identification of critical obstacles. Accordingly, we set up two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor for advanced motorcycle safety applications.

    Stereo visual simultaneous localisation and mapping for an outdoor wheeled robot: a front-end study

    For many mobile robotic systems, navigating an environment is a crucial step towards autonomy, and Visual Simultaneous Localisation and Mapping (vSLAM) has seen increasingly effective use in this capacity. However, vSLAM is strongly dependent on the context in which it is applied, often relying on heuristics and special cases to provide efficiency and robustness. It is thus crucial to identify the important parameters and factors for a particular context, as these heavily influence the algorithms, processes, and hardware required for the best results. In this body of work, a generic front-end stereo vSLAM pipeline is tested in the context of a small-scale outdoor wheeled robot that occupies less than 1 m³ of volume. The scale of the vehicle constrained the available processing power, Field Of View (FOV), actuation systems, and image distortions present. A dataset was collected with a custom platform that consisted of a Point Grey Bumblebee (now discontinued) stereo camera and an Nvidia Jetson TK1 processor. A stereo front-end feature tracking framework was described and evaluated both in simulation and experimentally where appropriate. It was found that scale adversely affected lighting conditions, FOV, baseline, and available processing power, all crucial factors to improve upon. The stereo constraint was effective for robustness, but ineffective in terms of processing power and metric reconstruction. An overall absolute odometry error of 0.25-3 m was produced on the dataset, but the pipeline was unable to run in real time.
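    A minimal version of such a stereo front-end step, assuming a rectified image pair, is sketched below; the feature type, thresholds, and function names are illustrative rather than taken from the dissertation.

import cv2

def stereo_match_features(left, right, max_row_diff=1.0):
    """Detect ORB features in a rectified stereo pair and keep matches that
    satisfy the stereo constraint (near-equal rows, positive disparity)."""
    orb = cv2.ORB_create(nfeatures=1500)
    kpl, dl = orb.detectAndCompute(left, None)
    kpr, dr = orb.detectAndCompute(right, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    kept = []
    for m in matcher.match(dl, dr):
        xl, yl = kpl[m.queryIdx].pt
        xr, yr = kpr[m.trainIdx].pt
        if abs(yl - yr) <= max_row_diff and (xl - xr) > 0:
            kept.append((m, xl - xr))  # match and its disparity
    return kept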

    Optical measurement of shape and deformation fields on challenging surfaces

    A multiple-sensor optical shape measurement system (SMS) based on the principle of white-light fringe projection has been developed and commercialised by Loughborough University and Phase Vision Ltd over more than 10 years. The use of the temporal phase unwrapping technique allows precise and dense shape measurements of complex surfaces, and the photogrammetry-based calibration technique offers the ability to calibrate multiple sensors simultaneously in order to achieve 360° measurement coverage. Nevertheless, to enhance the applicability of the SMS in industrial environments, further developments are needed (i) to improve the calibration speed for quicker deployment, (ii) to broaden the application range from shape measurement to deformation field measurement, and (iii) to tackle practically challenging surfaces whose specular components may disrupt the acquired data and result in spurious measurements. The calibration process typically requires manual positioning of an artefact (i.e., reference object) at many locations within the view of the sensors. This is not only time-consuming but also complicated for an operator with average knowledge of metrology. This thesis introduces an automated artefact positioning system which enables automatic and optimised distribution of the artefacts, automatic prediction of their whereabouts to increase the artefact detection speed and robustness, and thereby greater overall calibration performance. This thesis also describes a novel technique that integrates the digital image correlation (DIC) technique into the present fringe projection SMS for the purpose of simultaneous shape and deformation field measurement. This combined technique offers three key advantages: (a) the ability to deal with geometrical discontinuities which are commonly present on mechanical surfaces and currently challenging to most deformation measurement methods, (b) the ability to measure 3D displacement fields with a basic single-camera single-projector SMS with no additional hardware components, and (c) the simple implementation on a multiple-sensor hardware platform to achieve complete coverage of large-scale and complex samples, with the resulting displacement fields automatically lying in a single global coordinate system. A displacement measurement accuracy of ≅1/12,000 of the measurement volume, which is comparable to that of an industry-standard DIC system, has been achieved. The applications of this novel technique to several structural tests of aircraft wing panels on-site at the research centre of Airbus UK in Filton are also presented. Mechanical components with a shiny surface finish and complex geometry may introduce another challenge to present fringe projection techniques. In certain circumstances, multiple reflections of the projected fringes on an object surface may cause ambiguity in the phase estimation process and result in incorrect coordinate measurements. This thesis presents a new technique which adopts a Fourier domain ranging (FDR) method to correctly identify multiple phase signals and enable unambiguous triangulation for a measured coordinate. Experiments with the new FDR technique on various types of surfaces have shown promising results compared to traditional phase unwrapping techniques.
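    Two of the building blocks mentioned above, phase recovery from phase-shifted fringes and temporal phase unwrapping, can be sketched as follows; the four-step formula and frequency-ratio scheme are standard textbook choices, not necessarily the exact variants used in the SMS.

import numpy as np

def wrapped_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by 90 degrees each
    (standard four-step phase-shifting formula)."""
    return np.arctan2(i3 - i1, i0 - i2)

def temporal_unwrap(phi_high, phi_low, ratio):
    """Unwrap a high-frequency phase map using a lower-frequency map with
    `ratio` times fewer fringes: ratio * phi_low predicts the absolute phase,
    and the nearest 2*pi multiple resolves the fringe order."""
    k = np.round((ratio * phi_low - phi_high) / (2 * np.pi))
    return phi_high + 2 * np.pi * k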

    Evaluation of the controls affecting the quality of spatial data derived from historical aerial photographs

    This paper is concerned with the fundamental controls affecting the quality of data derived from historical aerial photographs typically used in geomorphological studies. A short review is provided of error sources introduced into the photogrammetric workflow. Datasets from two case studies provided a variety of source data and hence a good opportunity to evaluate the influence of the quality of archival material on the accuracy of coordinated points. Based on the statistical weights assigned to the measurements, the precision of the data was estimated a priori, while residuals of independent checkpoints provided an a posteriori measure of data accuracy. Systematic discrepancies between the two values indicated that the routinely used stochastic model was incorrect and overoptimistic. Optimized weighting factors appeared significantly larger than previously used (and accepted) values. A test of repeat measurements explained the large uncertainties associated with the use of natural objects for ground control. This showed not only that the random errors appeared to be much larger than values accepted for appropriately controlled and targeted photogrammetric networks, but also that small undetected gross errors were induced through the ‘misidentification’ of points. It is suggested that the effects of such ‘misidentifications’ should be reflected in the stochastic model through selection of more realistic weighting factors for both image and ground measurements. Using the optimized weighting factors, the accuracy of derived data can now be more realistically estimated, allowing the suitability of the imagery to be judged before purchase and processing.
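    The a posteriori accuracy check described here amounts to computing residual statistics at independent checkpoints and comparing them with the a priori precision implied by the stochastic model. A minimal sketch of that comparison, with hypothetical array names, follows.

import numpy as np

def checkpoint_rmse(measured_xyz, reference_xyz):
    """A posteriori accuracy: per-axis RMSE of independent checkpoint
    residuals, returned as (rmse_x, rmse_y, rmse_z)."""
    residuals = np.asarray(measured_xyz) - np.asarray(reference_xyz)
    return np.sqrt(np.mean(residuals ** 2, axis=0))

# If these RMSE values are systematically larger than the a priori standard
# deviations from the stochastic model, the assumed measurement weights are
# overoptimistic and should be relaxed.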