
    Road Condition Mapping by Integration of Laser Scanning, RGB Imaging and Spectrometry

    Roads are important infrastructure and the primary means of transportation. Control and maintenance of roads are essential, as the pavement surface deforms and deteriorates under heavy load and the influence of weather. Acquiring detailed information about the pavement condition is a prerequisite for proper planning of road pavement maintenance and rehabilitation. Many companies detect and localize road pavement distresses manually, either by on-site inspection or by digitizing laser data and imagery captured by mobile mapping. The automation of road condition mapping using laser data and colour images remains a challenge. Beyond that, the mapping of material properties of the road pavement surface with spectrometers has not yet been investigated. This study aims at the automatic mapping of road surface condition, including distresses and material properties, by integrating laser scanning, RGB imaging and spectrometry. All recorded data are geo-referenced by means of GNSS/INS. Methods are developed for pavement distress detection that cope with a variety of different weather and asphalt conditions. A further objective is to analyse and map the material properties of the pavement surface using spectrometry data. No standard test data sets are available for benchmarking developments in road condition mapping. Therefore, all data have been recorded with a mobile mapping van set up for the purpose of this research. The concept for detecting and localizing the four main pavement distresses, i.e. ruts, potholes, cracks and patches, is the following: ruts and potholes are detected using laser scanning data, cracks and patches using RGB images. For each of these pavement distresses, two or more methods are developed, implemented, compared to each other and evaluated to identify the most successful one. With respect to the material characteristics, spectrometer data of road sections are classified to indicate pavement quality. 
As a spectrometer registers a nearly continuous reflectivity curve in the VIS, NIR and SWIR wavelength ranges, indications of aging can be derived. After detection and localization of the pavement distresses and pavement quality classes, the road condition map is generated by overlaying all distresses and quality classes. As a preparatory step for rut and pothole detection, the road surface is extracted from the mobile laser scanning data based on a height-jump criterion. For the investigation on rut detection, all scanlines are processed. With an approach based on iterative 1D polynomial fitting, ruts are successfully detected. For streets with a width of 6 m to 10 m, a 6th-order polynomial is found to be most suitable. By 1D cross-correlation, the centre of the rut is localized. An alternative method using local curvature shows a high sensitivity to the shape and width of a rut and is less successful. For pothole detection, the approach based on polynomial fitting is generalized to two dimensions. As an alternative, a procedure using geodesic morphological reconstruction is investigated. Bivariate polynomial fitting encounters problems with overshoot at the boundary of the regions, whereas detection using geodesic morphology is very successful. For the detection of pavement cracks, three methods using rotation-invariant kernels are investigated: Line Filter, High-pass Filter and Modified Local Binary Pattern kernels are implemented. A conceptual aim of the procedure is to achieve a high degree of completeness. The most successful variant is the Line Filter, which achieves the highest completeness of 81.2 %. Two texture measures, the gradient magnitude and the local standard deviation, are employed to detect pavement patches. As patches may differ with respect to homogeneity and may not always have a dark border with the intact pavement surface, the method using the local standard deviation is more suitable for detecting them. 
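The iterative 1D polynomial fitting for rut detection described above could be sketched as follows. This is a minimal illustration on a synthetic scanline; the depth threshold, iteration count and rut geometry are hypothetical, not values taken from the thesis:

```python
import numpy as np

def detect_ruts(x, z, order=6, depth_thresh=0.01, n_iter=5):
    """Detect rut points on one laser scanline by iterative polynomial fitting.

    x, z : cross-track position and height [m] along one scanline.
    Points lying well below the fitted curve are excluded from the next fit,
    so the polynomial converges towards the undeformed road surface.
    """
    mask = np.ones_like(x, dtype=bool)
    for _ in range(n_iter):
        coeffs = np.polyfit(x[mask], z[mask], order)
        residuals = z - np.polyval(coeffs, x)
        mask = residuals > -depth_thresh   # drop points far below the fit
    return residuals < -depth_thresh       # rut points: deep negative residuals

# synthetic scanline: flat 8 m wide road with two 3 cm deep ruts
x = np.linspace(0.0, 8.0, 400)
z = np.zeros_like(x)
for c in (2.0, 6.0):                       # assumed wheel-track positions
    z -= 0.03 * np.exp(-((x - c) / 0.3) ** 2)

ruts = detect_ruts(x, z)
print(ruts.sum() > 0)                      # some rut points are detected
```

The centre of each detected rut could then be localized by 1D cross-correlation against a rut template, as the abstract describes.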
Linear discriminant analysis is utilized for asphalt pavement quality analysis and classification. Road pavement sections of ca. 4 m length are classified into two classes, namely “Good” and “Bad”, with an overall accuracy of 77.6 %. The experimental investigations show that the developed methods for automatic distress detection are very successful. By 1D polynomial fitting on laser scanlines, ruts are detected; in addition to ruts, pavement deformations such as shoving can also be revealed. The extraction of potholes is less demanding. As potholes appear relatively rarely in the road networks of a city, the road segments affected by potholes are selected interactively. While crack detection by the Line Filter works very well, patch detection is more challenging, as patches sometimes look very similar to the intact surface. The spectral classification of pavement sections contributes to road condition mapping as it gives hints on the aging of the road pavement.
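The two-class quality classification by linear discriminant analysis could be sketched as below. The spectral features and class separations are entirely synthetic stand-ins (the assumption that aged asphalt reflects more is illustrative only); the sketch uses a hand-rolled Fisher LDA rather than the thesis implementation:

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class Fisher LDA: returns projection vector w and decision threshold."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)                        # Sw^-1 (mu1 - mu0)
    thresh = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()    # midpoint of class means
    return w, thresh

rng = np.random.default_rng(0)
# hypothetical features: mean reflectance in five spectral bands per ~4 m section
good = rng.normal(0.10, 0.02, size=(60, 5))   # "Good" sections
bad = rng.normal(0.16, 0.02, size=(60, 5))    # "Bad" (aged) sections

w, thresh = fit_lda(good, bad)
pred_bad = np.vstack([good, bad]) @ w > thresh
labels = np.array([False] * 60 + [True] * 60)
accuracy = (pred_bad == labels).mean()
print(round(accuracy, 3))
```

On real spectrometer curves the features would of course be derived from the measured VIS/NIR/SWIR reflectivities rather than drawn from Gaussians.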

    Multispectral Imaging For Face Recognition Over Varying Illumination

    This dissertation addresses the advantage of using multispectral narrow-band images over conventional broad-band images for improved face recognition under varying illumination. To verify the effectiveness of multispectral images for improving face recognition performance, three sequential procedures are carried out: multispectral face image acquisition, image fusion of the multispectral bands, and spectral band selection to remove information redundancy. Several efficient image fusion algorithms are proposed and applied to spectral narrow-band face images in comparison with conventional images. Physics-based weighted fusion and illumination adjustment fusion make good use of the spectral information in the multispectral imaging process. The results demonstrate that fused narrow-band images outperform conventional broad-band images under varying illumination. In the case where multispectral images are acquired over severe changes in daylight, the fused images outperform conventional broad-band images by up to 78%. The success of fusing multispectral images lies in the fact that multispectral images can separate the illumination information from the reflectance of objects, which is impossible for conventional broad-band images. To reduce the information redundancy among multispectral images and simplify the imaging system, distance-based band selection is proposed, where a quantitative evaluation metric is defined to evaluate and differentiate the performance of multispectral narrow-band images. This method proves exceptionally robust to parameter changes. Furthermore, complexity-guided distance-based band selection is proposed, using a model selection criterion for automatic selection. The performance of the selected bands outperforms the conventional images by up to 15%. 
The significant performance improvement achieved via distance-based band selection and complexity-guided distance-based band selection shows that specific facial information carried in certain narrow-band spectral images can enhance face recognition performance compared to broad-band images. In addition, both algorithms prove to be independent of the recognition engine. Significant performance improvement is achieved by the proposed image fusion and band selection algorithms under varying illumination, including outdoor daylight conditions. The proposed imaging system and image processing algorithms open a new avenue towards automatic face recognition systems with better recognition performance than conventional systems under varying illumination.
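The weighted fusion of narrow-band images could be sketched as a per-band weighted average, as below. The weights here are purely hypothetical placeholders; in the physics-based scheme of the dissertation they would be derived from the illumination and imaging model:

```python
import numpy as np

def fuse_bands(bands, weights):
    """Weighted fusion of registered spectral narrow-band face images.

    bands   : array (n_bands, H, W) of narrow-band images
    weights : one weight per band (hypothetical here; normalized to sum to 1)
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return np.tensordot(weights, bands, axes=1)   # weighted sum over the band axis

# three synthetic 4x4 "narrow-band" images with constant intensities
bands = np.stack([np.full((4, 4), v) for v in (0.2, 0.5, 0.8)])
fused = fuse_bands(bands, [1.0, 2.0, 1.0])
print(fused[0, 0])   # (0.2 + 2*0.5 + 0.8) / 4 = 0.5
```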

    Discrete Wavelet Transforms

    Discrete wavelet transform (DWT) algorithms have a firm position in signal processing in several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is constantly used to solve and treat more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g. lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes progress in hardware implementations of DWT algorithms; applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach to edge detection, low-bit-rate image compression, low-complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book contain both tutorial and highly advanced material; the book is therefore intended as a reference text for graduate students and researchers seeking state-of-the-art knowledge on specific applications.
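The lifting construction mentioned above can be illustrated with the simplest case, an unnormalized Haar DWT: the signal is split into even and odd samples, the odd samples are predicted from the even ones (the differences become the detail coefficients), and the even samples are updated (becoming pairwise means). This is a generic textbook sketch, not code from the book:

```python
def haar_dwt_lifting(signal):
    """One level of the (unnormalized) Haar DWT via the lifting scheme.

    split -> predict (detail = odd - even) -> update (approx = even + detail/2)
    The update step turns the approximation into the pairwise mean.
    """
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

approx, detail = haar_dwt_lifting([2, 4, 6, 6, 8, 4])
print(approx)   # [3.0, 6.0, 6.0]  pairwise means
print(detail)   # [2, 0, -4]       pairwise differences
```

Applying the same step recursively to the approximation coefficients yields the octave-scale decomposition the book refers to.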

    Automatic Pipeline Surveillance Air-Vehicle

    This thesis presents the development of a vision-based system for aerial pipeline right-of-way surveillance using optical/infrared sensors mounted on Unmanned Aerial Vehicles (UAVs). The aim of the research is to develop a highly automated, on-board system for detecting and following pipelines while simultaneously detecting any third-party interference. The proposed approach of using a UAV platform could potentially reduce the cost of monitoring and surveying pipelines compared to manned aircraft. The main contributions of this thesis are the development of the image-analysis algorithms, the overall system architecture, and validation in hardware based on a scaled-down test environment. To evaluate the performance of the system, the algorithms were coded in the Python programming language. A small-scale test rig of the pipeline structure, including expected third-party interference, was set up to simulate the operational environment and to capture and record data for algorithm testing and validation. The pipeline endpoints are identified by transforming the 16-bit depth data of the explored environment into 3D point-cloud world coordinates. Then, using the Random Sample Consensus (RANSAC) approach, the foreground and background are separated based on the transformed 3D point cloud, extracting the plane that corresponds to the ground. Simultaneously, the boundaries of the explored environment are detected from the 16-bit depth data using a Canny detector. These boundaries, after being transformed into a 3D point cloud, are then filtered based on the real height of the pipeline, for fast and accurate measurement, using the Euclidean distance of each boundary point relative to the previously extracted ground plane. The filtered boundaries, transformed back into 16-bit depth data, are used to detect the straight lines of the object boundary (Hough lines) using the Hough transform method. 
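The RANSAC ground-plane separation step could be sketched as below. This is a generic minimal RANSAC loop on a synthetic scene, not the thesis implementation; the iteration count, inlier threshold and scene geometry are assumed values:

```python
import numpy as np

def ransac_ground_plane(points, n_iter=200, dist_thresh=0.02, rng=None):
    """Fit a ground plane to a 3D point cloud with a simple RANSAC loop.

    points : (N, 3) array. Returns (plane (a, b, c, d) with ax+by+cz+d = 0,
    boolean inlier mask of points within dist_thresh of the plane).
    """
    rng = rng or np.random.default_rng(0)
    best_plane, best_inliers = None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:                      # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = (*normal, d), inliers
    return best_plane, best_inliers

# synthetic scene: flat ground at z = 0 plus a raised "pipeline" at z = 0.3 m
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(0, 5, (300, 2)), rng.normal(0, 0.005, 300)])
pipe = np.column_stack([rng.uniform(0, 5, (60, 2)), rng.normal(0.3, 0.005, 60)])
plane, inliers = ransac_ground_plane(np.vstack([ground, pipe]))
print(inliers[:300].mean(), inliers[300:].mean())
```

Points off the recovered plane (here the raised "pipeline" points) form the foreground, which is what the boundary-height filtering then operates on.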
The pipeline is verified by estimating a centre-line segment from the 3D point cloud of each pair of Hough line segments (transformed into 3D). The pipeline point cloud is then filtered for linearity within the width of the pipeline, using the Euclidean distance in the foreground point cloud. The segment length of the detected centre line is then extended along the filtered point cloud of the pipeline to match the exact pipeline segment. Third-party interference is detected based on four parameters, namely: foreground depth data; pipeline depth data; pipeline endpoint locations in the 3D point cloud; and right-of-way distance. The techniques include detection, classification and localization algorithms. Finally, a waypoint-based navigation system was implemented for the air vehicle to fly over course waypoints generated online by a heading-angle demand, following the pipeline structure in real time based on the online identification of the pipeline endpoints relative to the camera frame.
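The width-based filtering around the estimated centre line could be sketched as a point-to-line distance test. The centre line, half-width and sample points below are hypothetical illustration values:

```python
import numpy as np

def filter_within_width(points, line_p, line_dir, half_width):
    """Keep 3D points whose perpendicular distance to the centre line
    is at most half the pipeline width.

    points   : (N, 3) foreground point cloud
    line_p   : a point on the estimated centre line
    line_dir : direction vector of the centre line (normalized internally)
    """
    line_dir = line_dir / np.linalg.norm(line_dir)
    rel = points - line_p
    # perpendicular component: rel minus its projection onto the line direction
    perp = rel - np.outer(rel @ line_dir, line_dir)
    return points[np.linalg.norm(perp, axis=1) <= half_width]

pts = np.array([[0.0, 0.05, 0.0],   # 5 cm off the line  -> kept
                [1.0, 0.40, 0.0],   # 40 cm off          -> dropped
                [2.0, 0.00, 0.1]])  # 10 cm off          -> kept
kept = filter_within_width(pts, np.zeros(3), np.array([1.0, 0.0, 0.0]), 0.15)
print(len(kept))   # 2
```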

    Final Report for the ZERT Project: Basic Science of Retention Issues, Risk Assessment & Measurement, Monitoring and Verification for Geologic Sequestration

    ZERT has made major contributions to five main areas of sequestration science: improvement of computational tools; measurement and monitoring techniques to verify storage and track migration of CO₂; development of a comprehensive performance and risk assessment framework; fundamental geophysical, geochemical and hydrological investigations of CO₂ storage; and investigation of innovative, bio-based mitigation strategies.

    Quantitative electron microscopy for microstructural characterisation

    Development of materials for high-performance applications requires accurate and useful analysis tools. In parallel with advances in electron microscopy hardware, we require analysis approaches to better understand microstructural behaviour. Such improvements in characterisation capability permit informed alloy design. New approaches to the characterisation of metallic materials are presented, primarily using signals collected from electron microscopy experiments. Electron backscatter diffraction is regularly used to investigate crystallography in the scanning electron microscope, and is combined with energy-dispersive X-ray spectroscopy to simultaneously investigate chemistry. New algorithms and analysis pipelines are developed to permit accurate and routine microstructural evaluation, leveraging a variety of machine learning approaches. This thesis investigates the structure and behaviour of Co/Ni-base superalloys derived from V208C. Use of the presently developed techniques permits informed development of a new generation of advanced gas turbine engine materials.

    Personality Identification from Social Media Using Deep Learning: A Review

    Social media helps in the sharing of ideas and information among people scattered around the world, and thus helps in creating communities, groups and virtual networks. Identification of personality is significant in many types of applications, such as detecting the mental state or character of a person, predicting job satisfaction, predicting professional and personal relationship success, and in recommendation systems. Personality is also an important factor in determining individual variation in thoughts, feelings and conduct. According to the Global Social Media Research survey of 2018, there are approximately 3.196 billion social media users worldwide, and the numbers are estimated to grow rapidly with the use of mobile smart devices and advances in technology. Support vector machines (SVM), Naive Bayes (NB), multilayer perceptron neural networks and convolutional neural networks (CNN) are some of the machine learning techniques used for personality identification in the literature. This paper presents various studies conducted on identifying the personality of social media users with the help of machine learning approaches, and reviews recent studies that aimed to predict the personality of online social media (OSM) users.
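One of the classifiers the review mentions, Naive Bayes, could be sketched on text as below. The mini-corpus, the extrovert/introvert labels and the word statistics are entirely made up for illustration; real studies use established trait inventories (e.g. Big Five labels) and far larger data:

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a word-level Naive Bayes text classifier from (text, label) pairs."""
    counts, totals, priors = {}, Counter(), Counter()
    for text, label in docs:
        priors[label] += 1
        words = text.lower().split()
        counts.setdefault(label, Counter()).update(words)
        totals[label] += len(words)
    vocab = {w for c in counts.values() for w in c}
    return counts, totals, priors, vocab

def predict_nb(model, text):
    counts, totals, priors, vocab = model
    n_docs = sum(priors.values())
    best, best_lp = None, -math.inf
    for label in priors:
        lp = math.log(priors[label] / n_docs)
        for w in text.lower().split():
            # Laplace smoothing over the shared vocabulary
            lp += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# hypothetical mini-corpus with made-up personality labels
docs = [("love parties and meeting new people", "extrovert"),
        ("great night out with friends", "extrovert"),
        ("quiet evening reading a book alone", "introvert"),
        ("prefer staying home with tea and a novel", "introvert")]
model = train_nb(docs)
print(predict_nb(model, "meeting friends at a party"))   # extrovert
```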

    Program and abstracts

    Lunar and Planetary Institute, NASA's Mars Exploration Program; convener, Timothy Titus; scientific organizing committee, Timothy Titus ... [and others]. PARTIAL CONTENTS: Modeling Aeolian Erosion Potential on Mars with the MRAMS LES -- Huygens Boundary Layer Data Explain the ~3 km Spacing of Titan's Dunes -- Active Dune Fields of the Navajo Nation, Southwestern United States -- Characteristic Time Scales of Dune-related Processes in Polar Regions of Mars -- Sand Composition of the Gran Desierto: A Terrestrial Analogue for Thermal Infrared Imaging and Spectroscopy Techniques.

    Skin texture features for face recognition

    Face recognition has been deployed in a wide range of important applications, including surveillance and forensic identification. However, it remains a challenging problem, as its performance severely degrades under illumination, pose and expression variations, as well as with occlusion and aging. In this thesis, we have investigated the use of local facial skin data as a source of biometric information to improve human recognition. Skin texture features have been exploited in three major tasks: (i) improving the performance of conventional face recognition systems, (ii) building an adaptive skin-based face recognition system, and (iii) dealing with circumstances in which a full view of the face may not be available. Additionally, a fully automated scheme is presented for localizing the eyes and mouth and segmenting four facial regions: forehead, right cheek, left cheek and chin. These four regions are divided into non-overlapping patches of equal size. A novel skin/non-skin classifier is proposed for detecting patches containing only skin texture and thereby detecting the pure-skin regions. Experiments using the XM2VTS database indicate that the forehead region carries the most significant biometric information. The use of forehead texture features improves the rank-1 identification rate of the Eigenfaces system from 77.63% to 84.07%. The rank-1 identification rate reaches 93.56% when this region is fused with the Kernel Direct Discriminant Analysis algorithm.
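The division of a facial region into equal-size, non-overlapping patches could be sketched as below. The region contents and patch size are hypothetical; any remainder at the edges is simply discarded here, which may or may not match the thesis:

```python
def split_into_patches(region, patch_h, patch_w):
    """Split a 2D region (list of rows) into non-overlapping patches of
    equal size, discarding any remainder at the right/bottom edges."""
    rows, cols = len(region), len(region[0])
    patches = []
    for r in range(0, rows - patch_h + 1, patch_h):
        for c in range(0, cols - patch_w + 1, patch_w):
            patches.append([row[c:c + patch_w] for row in region[r:r + patch_h]])
    return patches

# a hypothetical 4x6 "forehead" intensity region split into 2x2 patches
region = [[i * 6 + j for j in range(6)] for i in range(4)]
patches = split_into_patches(region, 2, 2)
print(len(patches))   # 6 patches (2 rows x 3 columns of patches)
print(patches[0])     # [[0, 1], [6, 7]]
```

Each patch would then be passed to the skin/non-skin classifier, keeping only pure-skin patches for the texture features.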