
    Automated Pavement Crack Segmentation Using U-Net-based Convolutional Neural Network

    Automated pavement crack image segmentation is challenging because of inherent irregular patterns, lighting conditions, and noise in the images. Conventional approaches require a substantial amount of feature engineering to differentiate crack regions from unaffected regions. In this paper, we propose a deep learning technique based on a convolutional neural network to perform segmentation on pavement crack images. Our approach requires minimal feature engineering compared with other machine learning techniques. We propose a U-Net-based network architecture in which the encoder is replaced with a pretrained ResNet-34 neural network. We use a "one-cycle" training schedule based on cyclical learning rates to speed up convergence. Our method achieves an F1 score of 96% on the CFD dataset and 73% on the Crack500 dataset, outperforming the other algorithms tested on these datasets. We perform ablation studies on the techniques that gave us marginal performance boosts: the addition of spatial and channel squeeze-and-excitation (SCSE) modules, training with gradually increasing image sizes, and training different network layers with different learning rates. Comment: Accepted for publication in IEEE Access.
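The "one-cycle" schedule mentioned above can be sketched in plain Python. This is an illustrative sketch of the general shape of the schedule (linear warm-up, linear cool-down, final annihilation phase); the parameter names, phase fractions and divisors are our assumptions, not the paper's settings:

```python
def one_cycle_lr(step, total_steps, lr_max, phase_frac=0.45, div=10.0, final_div=100.0):
    """Learning rate at `step` for a one-cycle schedule (illustrative sketch).

    Rises linearly from lr_max/div to lr_max over the first phase, decays
    symmetrically back in the second, then a short annihilation phase pushes
    the rate down to lr_max/final_div.
    """
    lr_min = lr_max / div
    up_end = int(total_steps * phase_frac)
    down_end = int(total_steps * 2 * phase_frac)
    if step <= up_end:                                   # warm-up
        t = step / max(up_end, 1)
        return lr_min + t * (lr_max - lr_min)
    if step <= down_end:                                 # cool-down
        t = (step - up_end) / max(down_end - up_end, 1)
        return lr_max - t * (lr_max - lr_min)
    t = (step - down_end) / max(total_steps - down_end, 1)   # annihilation
    return lr_min - t * (lr_min - lr_max / final_div)
```

In a real training loop this value would be assigned to the optimizer's learning rate every batch, often with momentum cycled inversely.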

    Variational methods and their applications to computer vision

    Many computer vision applications, such as image segmentation, can be formulated in a "variational" way as energy minimization problems. Unfortunately, the computational task of minimizing these energies is usually difficult, as it generally involves non-convex functions in a space with thousands of dimensions, and the associated combinatorial problems are often NP-hard. Furthermore, they are ill-posed inverse problems and therefore extremely sensitive to perturbations (e.g. noise). For this reason, in order to compute a physically reliable approximation from given noisy data, it is necessary to incorporate appropriate regularizations into the mathematical model, which requires complex computations. The main aim of this work is to describe variational segmentation methods that are particularly effective for curvilinear structures. Due to their complex geometry, classical regularization techniques cannot be adopted because they lead to the loss of most low-contrast details. In contrast, the proposed method not only better preserves curvilinear structures, but also reconnects parts that may have been disconnected by noise. Moreover, it can easily be extended to graphs and successfully applied to different types of data such as medical imagery (e.g. vessels, heart coronaries), material samples (e.g. concrete) and satellite signals (e.g. streets, rivers). In particular, we show results and performance for an implementation targeting a new generation of High Performance Computing (HPC) architectures in which different types of coprocessors cooperate. The dataset involved consists of approximately 200 images of cracks, captured in three different tunnels by a robotic machine designed for the European ROBO-SPECT project.
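A minimal instance of the variational idea is 1D total-variation denoising: a fidelity term keeps the solution near the noisy data while a (smoothed) TV regularizer suppresses noise without blurring jumps. Everything here, from the function name to the step size, is an illustrative sketch, not the method developed in the work above:

```python
import numpy as np

def tv_denoise_1d(f, lam=0.25, tau=0.1, eps=1e-2, iters=1000):
    """Gradient descent on E(u) = 0.5*||u - f||^2 + lam * sum sqrt((u_{i+1}-u_i)^2 + eps).

    The eps term smooths the non-differentiable absolute value so plain
    gradient descent applies; tau is the descent step size.
    """
    u = f.astype(float)
    for _ in range(iters):
        du = np.diff(u)
        g = du / np.sqrt(du**2 + eps)                         # derivative of smoothed |.|
        div = np.concatenate(([g[0]], np.diff(g), [-g[-1]]))  # discrete divergence
        u = u - tau * ((u - f) - lam * div)                   # gradient step on E
    return u
```

Edge-preserving variants of this energy (with curvature- or graph-based regularizers) are the kind of model the curvilinear-structure methods described above build on.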

    Automatic surface defect quantification in 3D

    Three-dimensional (3D) non-contact optical methods for surface inspection are of significant interest to many industrial sectors. Many aspects of manufacturing processes have become fully automated, resulting in high production volumes. However, this is not necessarily the case for surface defect inspection. Existing human visual analysis of surface defects is qualitative and subject to varying interpretation. Automated 3D non-contact analysis should provide a robust and systematic quantitative approach. However, different 3D optical measurement technologies use different physical principles and interact with surfaces and defects in diverse ways, leading to variation in measurement data. An instrument's native software processing of the data may be non-traceable in nature, leading to significant uncertainty about the quantification of the data. Sub-millimetric surface defect artefacts have been created using Rockwell and Vickers hardness testing equipment on various substrates. Four different non-contact surface measurement instruments (Alicona InfiniteFocus G4, Zygo NewView 5000, GFM MikroCAD Lite and Heliotis H3) have been used to measure the defect artefacts. The four 3D optical instruments are evaluated against calibrated step heights created using slip gauges and reference defect artefacts. The experimental results are compared to select the instrument most capable of measuring surface defects in a robust manner. This research has identified the need for an automatic tool to quantify surface defects, and thus a mathematical solution has been implemented for automatic defect detection and quantification (depth, area and volume) in 3D. A simulated defect softgauge with a known geometry has been developed in order to verify the implemented algorithm and provide mathematical traceability. The implemented algorithm has proved to be a traceable, highly repeatable and fast solution for quantifying surface defects in 3D. Various industrial components with suspicious features, as well as solder joints on PCBs, are measured and quantified in order to demonstrate its applicability.
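The depth/area/volume quantification step can be illustrated with a small height-map routine. This is a hedged sketch under simple assumptions (the surface has already been levelled, pixels are square, a fixed threshold separates defect from intact surface); it is not the thesis's actual algorithm:

```python
import numpy as np

def quantify_defect(height, pixel_size, threshold=0.0):
    """Quantify material missing below `threshold` in a levelled height map.

    height     : 2D array of surface heights (e.g. mm) after plane removal
    pixel_size : lateral sampling step (mm per pixel), square pixels assumed
    Returns (max_depth, projected_area, volume) of the defect region.
    """
    defect = height < threshold                   # binary defect mask
    depth = np.where(defect, threshold - height, 0.0)
    max_depth = float(depth.max())                # deepest point of the pit
    area = float(defect.sum()) * pixel_size**2    # projected area
    volume = float(depth.sum()) * pixel_size**2   # depth integrated over area
    return max_depth, area, volume
```

A softgauge test, as described above, would feed a synthetic pit of known geometry through such a routine and compare the returned values against the analytic ones.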

    Damage volumetric assessment and digital twin synchronization based on LiDAR point clouds

    Point clouds are widely used for structure inspection and can provide spatial information about damage. However, how to update a digital twin (DT) with local damage based on point clouds has not been sufficiently studied. This research presents an efficient framework for assessing local damage on a planar surface and synchronizing it to a DT using point clouds. The pipeline starts with damage detection via DeepLabV3+ on pseudo-grayscale images derived from point depth, which avoids the drawbacks of image and point cloud fusion. The target point cloud is segmented according to the detected damage. It can then be converted into a 3D binary matrix through voxelization and binarization, which is highly lightweight and can be losslessly compressed for DT synchronization. The framework is validated via two case studies, demonstrating that the proposed voxel-based method can readily be applied to real-world damage with non-convex geometry, unlike convex-hull fitting, and that finite-element (FE) models and BIM models can be updated automatically through the framework.
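The voxelization-and-binarization step can be sketched as follows. The helper names, the voxel size in the example, and the use of zlib as the lossless compressor are our assumptions for illustration; the abstract does not specify this exact implementation:

```python
import zlib
import numpy as np

def points_to_binary_voxels(points, voxel_size):
    """Convert an (N, 3) damage point cloud to a 3D binary occupancy matrix."""
    origin = points.min(axis=0)                          # grid anchor
    idx = np.floor((points - origin) / voxel_size).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1            # mark occupied voxels
    return grid, origin

def pack(grid):
    """Bit-pack and losslessly compress the binary matrix for DT synchronization."""
    return zlib.compress(np.packbits(grid).tobytes())
```

Because the matrix holds only 0/1 values, bit-packing alone shrinks it eightfold before entropy coding, which is what makes the representation lightweight enough to synchronize.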

    A Routine and Post-disaster Road Corridor Monitoring Framework for the Increased Resilience of Road Infrastructures


    Road Condition Mapping by Integration of Laser Scanning, RGB Imaging and Spectrometry

    Roads are important infrastructure and a primary means of transportation. Control and maintenance of roads are substantial tasks, as the pavement surface deforms and deteriorates under heavy load and the influence of weather. Acquiring detailed information about pavement condition is a prerequisite for proper planning of road pavement maintenance and rehabilitation. Many companies detect and localize road pavement distresses manually, either by on-site inspection or by digitizing laser data and imagery captured by mobile mapping. The automation of road condition mapping using laser data and colour images remains a challenge. Beyond that, the mapping of material properties of the road pavement surface with spectrometers has not yet been investigated. This study aims at automatic mapping of road surface condition, including distress and material properties, by integrating laser scanning, RGB imaging and spectrometry. All recorded data are geo-referenced by means of GNSS/INS. Methods are developed for pavement distress detection that cope with a variety of weather and asphalt conditions. A further objective is to analyse and map the material properties of the pavement surface using spectrometry data. No standard test data sets are available for benchmarking developments in road condition mapping; therefore, all data were recorded with a mobile mapping van set up for the purposes of this research. The concept for detecting and localizing the four main pavement distresses, i.e. ruts, potholes, cracks and patches, is the following: ruts and potholes are detected using laser scanning data, cracks and patches using RGB images. For each of these pavement distresses, two or more methods are developed, implemented, compared and evaluated to identify the most successful one. With respect to material characteristics, spectrometer data of road sections are classified to indicate pavement quality.
    As a spectrometer records a nearly continuous reflectivity curve across the VIS, NIR and SWIR wavelengths, indications of aging can be derived. After detection and localization of the pavement distresses and pavement quality classes, the road condition map is generated by overlaying all distresses and quality classes. As a preparatory step for rut and pothole detection, the road surface is extracted from mobile laser scanning data based on a height-jump criterion. For the investigation of rut detection, all scanlines are processed. With an approach based on iterative 1D polynomial fitting, ruts are successfully detected. For streets 6 m to 10 m wide, a 6th-order polynomial is found to be most suitable. The centre of the rut is localized by 1D cross-correlation. An alternative method using local curvature shows a high sensitivity to the shape and width of a rut and is less successful. For pothole detection, the approach based on polynomial fitting is generalized to two dimensions. As an alternative, a procedure using geodesic morphological reconstruction is investigated. Bivariate polynomial fitting encounters problems with overshoot at the boundaries of the regions, while detection using geodesic morphology is very successful. For the detection of pavement cracks, three methods using rotation-invariant kernels are investigated: Line Filter, High-pass Filter and Modified Local Binary Pattern kernels are implemented. A conceptual aspect of the procedure is to achieve a high degree of completeness. The most successful variant is the Line Filter, which achieves the highest degree of completeness, 81.2 %. Two texture measures, the gradient magnitude and the local standard deviation, are employed to detect pavement patches. As patches may differ in homogeneity and may not always have a dark border against the intact pavement surface, the method using the local standard deviation is more suitable for detecting patches.
    Linear discriminant analysis is utilized for asphalt pavement quality analysis and classification. Road pavement sections of ca. 4 m length are classified into two classes, namely "Good" and "Bad", with an overall accuracy of 77.6 %. The experimental investigations show that the developed methods for automatic distress detection are very successful. Ruts are detected by 1D polynomial fitting on laser scanlines; in addition to ruts, pavement depressions such as shoving can also be revealed. The extraction of potholes is less demanding. As potholes appear relatively rarely in the road network of a city, the road segments affected by potholes are selected interactively. While crack detection by the Line Filter works very well, patch detection is more challenging, as patches sometimes look very similar to the intact surface. The spectral classification of pavement sections contributes to road condition mapping, as it gives hints on the aging of the road pavement.
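The iterative 1D polynomial fitting used above for rut detection can be sketched as follows. The robust-rejection rule, the default degree and the synthetic profile in the test are our illustrative assumptions, not the thesis's exact procedure (which also localizes the rut centre by 1D cross-correlation rather than the simple minimum used here):

```python
import numpy as np

def detect_rut(x, z, degree=6, iters=5, k=2.0):
    """Iteratively fit a polynomial to a road cross-profile and expose the rut.

    Points lying far below the current fit are excluded from the next fit, so
    the polynomial settles on the undeformed pavement shape; the negative
    residual then marks the rut, whose deepest point is returned as centre.
    """
    keep = np.ones(len(z), dtype=bool)
    for _ in range(iters):
        coeffs = np.polyfit(x[keep], z[keep], degree)
        resid = z - np.polyval(coeffs, x)
        sigma = resid[keep].std() + 1e-12
        keep = resid > -k * sigma          # drop points well below the fit
    centre = int(np.argmin(resid))
    return resid, centre
```

On a synthetic crowned profile with a single depression, the returned centre should land at the depression; on real scanlines the cross-correlation step described above gives a more robust localization.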

    Analysis and evaluation of fragment size distributions in rock blasting at the Erdenet Mine

    Master's Project (M.S.), University of Alaska Fairbanks, 2015. Rock blasting is one of the most important operations in mining. It significantly affects the subsequent comminution processes and is therefore critical to successful mining production. In this study, to evaluate the blasting performance at the Erdenet Mine, we analyzed rock fragment size distributions with a digital image processing method. The uniformities of the rock fragments and the mean fragment sizes were determined and applied in the Kuz-Ram model. Statistical prediction models were also developed based on field-measured parameters, and the results were compared with the Kuz-Ram model predictions and the digital image processing measurements. A total of twenty-eight images from eleven blasting patterns were processed, and rock size distributions were determined with the Split-Desktop program. Based on the rock mass and explosive properties and the blasting parameters, rock fragment size distributions were also determined with the Kuz-Ram model and compared with the measurements from digital image processing. Furthermore, in order to improve the prediction of rock fragment size distributions at the mine, regression analyses were conducted and statistical models were developed for the estimation of the uniformity and characteristic size. The results indicated discrepancies between the digital image measurements and those estimated by the Kuz-Ram model. The uniformity indices of the image processing measurements varied from 0.76 to 1.90, while those estimated by the Kuz-Ram model ranged from 1.07 to 1.13. The mean fragment size predicted by the Kuz-Ram model was 97.59% greater than that measured by image processing.
The multivariate nonlinear regression analyses conducted in this study indicated that rock uniaxial compressive strength and elastic modulus, explosive energy input in the blasting, bench height-to-burden ratio and blast area per hole were significant predictor variables in determining the fragment characteristic size and the uniformity index. The regression models developed on the basis of these predictor variables showed much closer agreement with the measurements.
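The Kuz-Ram comparison above rests on two standard formulas: Kuznetsov's equation for the mean fragment size and the Rosin-Rammler distribution for the passing curve. A sketch of the textbook forms in Python (the parameter values used in the test are made up for illustration, not the Erdenet data):

```python
import math

def kuz_ram_mean_size(A, K, Q, E=100.0):
    """Kuznetsov mean fragment size X50 in cm.

    A : rock factor, K : powder factor (kg of explosive per m^3 of rock),
    Q : explosive mass per blasthole (kg), E : relative weight strength
    of the explosive (ANFO = 100).
    """
    return A * K**-0.8 * Q**(1.0 / 6.0) * (115.0 / E)**(19.0 / 30.0)

def rosin_rammler_passing(x, x50, n):
    """Fraction of fragments passing a screen of size x, given median size x50
    and uniformity index n."""
    xc = x50 / 0.693**(1.0 / n)       # characteristic size from the median
    return 1.0 - math.exp(-((x / xc)**n))
```

By construction about half the material passes at the median size, and a larger uniformity index n gives a steeper, more uniform distribution; n is exactly the quantity on which the image processing measurements (0.76 to 1.90) and the Kuz-Ram estimates (1.07 to 1.13) disagreed.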

    ITErRoot: High Throughput Segmentation of 2-Dimensional Root System Architecture

    Root system architecture (RSA) analysis is a form of high-throughput plant phenotyping which has recently benefited from the application of various deep learning techniques. A typical RSA pipeline includes a segmentation step, in which the root system is extracted from 2D images. The segmented image is then passed to subsequent processing steps, which produce a representation of the architectural properties of the root system. This representation is used for trait computation, which can identify various desirable properties of a plant's RSA. Errors arising at the segmentation stage propagate through the remainder of the pipeline and affect the results of trait analysis. This work designs an iterative neural network architecture, called ITErRoot, which is particularly well suited to segmenting root structure from 2D images in the presence of non-root objects. A novel 2D root image dataset is created, along with a ground-truth annotation tool designed to facilitate consistent manual annotation of RSA. The proposed architecture is able to exploit the root structure to obtain a high-quality segmentation and generalizes to root systems with thin roots, showing improved quality over recent approaches to RSA segmentation. We provide a rigorous analysis designed to identify the strengths and weaknesses of the proposed model and to validate the effectiveness of the approach for producing high-quality segmentations.
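The feed-back loop at the heart of an iterative segmentation architecture can be shown in miniature. The sketch below is a generic iterative-refinement loop with a toy region-growing step standing in for the network; it illustrates the idea behind ITErRoot (each pass refines the previous mask) but is not the actual model:

```python
import numpy as np

def iterative_segment(image, refine, iters=4):
    """Apply a refinement step repeatedly, feeding each mask back in.

    `refine` maps (image, previous_mask) -> new_mask; repeated passes let
    thin, low-contrast structures be recovered gradually.
    """
    mask = np.zeros_like(image, dtype=float)
    for _ in range(iters):
        mask = refine(image, mask)
    return mask

def grow_step(image, mask):
    """Toy stand-in for the network: seed on bright pixels, then grow the
    mask by one 4-neighbour step into moderately bright pixels."""
    seeded = np.maximum(mask, (image > 0.9).astype(float))
    pad = np.pad(seeded, 1)
    neigh = np.maximum.reduce(
        [pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]])
    return np.maximum(seeded, neigh * (image > 0.5))
```

Run on a faint, root-like line with a single bright seed, the mask extends one pixel per pass, which is why the number of iterations bounds the length of thin structure this toy version can recover.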