39 research outputs found

    Machine learning algorithms for monitoring pavement performance

    Get PDF
    ABSTRACT: This work reviews the need to develop competitive, low-cost technologies, applicable to real roads, for detecting asphalt condition by means of Machine Learning (ML) algorithms. Specifically, the most recent studies are described according to the data collection method: images, ground-penetrating radar (GPR), laser and optical fiber. The main models presented in these state-of-the-art studies are Support Vector Machines, Random Forests, Naïve Bayes, artificial neural networks and convolutional neural networks. For each analysis, the methodology, type of problem, data source, computational resources, discussion and future research are highlighted. Open data sources, programming frameworks, model comparisons and data collection technologies are surveyed to help the research community initiate future investigations. Although there is substantial research on ML-based pavement evaluation, it is not yet widely applied by pavement management entities, so further refinement of both the models and the data collection methods is needed.

    Parallel Visibility and Fresnel-Zones Calculation Using Graphics Processing Units

    Full text link
    The work describes an innovative method for calculating the visibility [61, 62] and Fresnel zones on digital maps using NVIDIA CUDA graphics processing cards.
Three parallel algorithms were developed: a modified parallel R2 algorithm for calculating visibility (R2-P), an algorithm for calculating Fresnel zone clearance (FZC), and an algorithm for calculating the Fresnel zone transverse intersection between the transmitter and the receiver (FZTI). The modified parallel algorithm R2-P was developed on the basis of the established sequential R2 visibility algorithm. Aside from multithreading, other useful features of the graphics processing unit were exploited to speed up the calculation: coalesced access to global memory improves the flow of data and thus the calculation speed, and the exchange of information between threads during computation plays a key role in the speedup. Segmentation of the digital map enables visibility calculation on arbitrarily large data sets. The modified parallel R2 algorithm was compared with already implemented viewshed algorithms in terms of accuracy and computation time. The new R2-P algorithm proved as accurate as the established sequential R2 algorithm while speeding up the calculation substantially: computation time drops from the order of a few minutes to the order of a couple of seconds, which in practice makes interactive work possible. In addition to the viewshed, Fresnel zone clearance is very useful for planning radio coverage. The FZC algorithm takes the location and height of the radio transmitter, the receiver observation height above terrain, and the wavelength of the radio waves, and for each point of the terrain calculates which Fresnel zone is obstructed. The result is a digital map with the areas of Fresnel zone obstruction plotted, which says considerably more about the radio signal on the terrain than a viewshed calculation alone.
Especially in areas where the first Fresnel zone is completely obscured, this yields information that is very useful in practice compared with a plain visibility calculation. The algorithm can also take land use into account, raising the terrain height as a function of land use (e.g., by about 15 m for forested areas). With modifications such as the introduction of the Friis transmission equation and consideration of the antenna radiation pattern, the algorithm becomes a simple radio propagation model and is thus suitable for calculating radio coverage. The calculated radio signal is compared with values measured in the field at 90 MHz (FM), 800 MHz (LTE) and 1800 MHz (LTE). For a variety of input parameters of the simple propagation model, the standard deviation of the differences between measured and calculated values is computed and plotted, yielding the optimal input-parameter values for each frequency band separately. The FZTI algorithm produces an image of Fresnel zones representing the mathematical intersection of all scaled transverse cross-sections of the Fresnel zones along the transmission path. The result is a visual image that shows the characteristics of the radio link in terms of the obstruction of individual Fresnel zones. In practice, the algorithm would be most useful in the design of radio links, where one can check how much of, and which part of, the Fresnel zones is missing due to terrain obstacles. All three algorithms are implemented as GRASS GIS modules and can be used on any PC with an NVIDIA CUDA graphics processing unit and the appropriate freely available software installed.
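The geometry the FZC algorithm evaluates at each terrain point can be stated compactly. The following is a minimal sketch of the underlying formulas, not the GRASS GIS implementation itself: the radius of the n-th Fresnel zone at distances d1 and d2 from the link endpoints is r_n = sqrt(n·λ·d1·d2/(d1+d2)), the lowest obstructed zone follows from comparing r_n with the clearance of the direct ray over an obstacle, and the Friis equation gives free-space received power. Function names are illustrative.

```python
import math

def fresnel_radius(n, wavelength, d1, d2):
    """Radius (m) of the n-th Fresnel zone at a point d1 metres from
    the transmitter and d2 metres from the receiver."""
    return math.sqrt(n * wavelength * d1 * d2 / (d1 + d2))

def lowest_obstructed_zone(clearance, wavelength, d1, d2):
    """Index of the lowest Fresnel zone obstructed by an obstacle whose
    top passes `clearance` metres below the direct transmitter-receiver
    ray at this point (non-positive clearance: ray itself is blocked)."""
    if clearance <= 0:
        return 1
    # zone n is obstructed when fresnel_radius(n, ...) > clearance, i.e.
    # when n > clearance^2 * (d1 + d2) / (wavelength * d1 * d2)
    return math.floor(clearance ** 2 * (d1 + d2) / (wavelength * d1 * d2)) + 1

def friis_received_power(p_tx, g_tx, g_rx, wavelength, distance):
    """Friis free-space equation; result in the same units as p_tx."""
    return p_tx * g_tx * g_rx * (wavelength / (4 * math.pi * distance)) ** 2
```

For example, at 800 MHz (wavelength about 0.375 m) the first Fresnel zone at the midpoint of a 10 km link has a radius of roughly 30.6 m, so a ridge leaving only 10 m of clearance there obstructs the first zone.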

    Synthetic Aperture Radar (SAR) Meets Deep Learning

    Get PDF
    This reprint focuses on the application of synthetic aperture radar combined with deep learning technology, and aims to further promote the development of intelligent SAR image interpretation. A synthetic aperture radar (SAR) is an important active microwave imaging sensor whose all-day, all-weather operating capability gives it an important place in the remote sensing community. Since the United States launched the first SAR satellite, SAR has received much attention in remote sensing, e.g., in geological exploration, topographic mapping, disaster forecasting, and traffic monitoring. It is therefore valuable and meaningful to study SAR-based remote sensing applications. In recent years, deep learning, represented by convolutional neural networks, has driven significant progress in the computer vision community, e.g., in face recognition, driverless vehicles and the Internet of Things (IoT). Deep learning enables computational models with multiple processing layers to learn data representations at multiple levels of abstraction, which can greatly improve the performance of various applications. This reprint provides a platform for researchers to tackle these significant challenges and present their innovative and cutting-edge results on applying deep learning to SAR in various manuscript types, e.g., articles, letters, reviews and technical reports.

    Integrated Applications of Geo-Information in Environmental Monitoring

    Get PDF
    This book focuses on fundamental and applied research on geo-information technology, notably optical and radar remote sensing and algorithm improvements, and their applications in environmental monitoring. This Special Issue presents ten high-quality research papers covering up-to-date research in land cover change and desertification analyses, geo-disaster risk and damage evaluation, mining area restoration assessments, the improvement and development of algorithms, and coastal environmental monitoring and object targeting. The purpose of this Special Issue is to promote exchange and communication, share the research outcomes of scientists worldwide, and bridge the gap between scientific research and its applications for advancing and improving society.

    Special Topics in Information Technology

    Get PDF
    This open access book presents outstanding doctoral dissertations in Information Technology from the Department of Electronics, Information and Bioengineering, Politecnico di Milano, Italy. Information Technology has always been highly interdisciplinary, as many aspects have to be considered in IT systems. The doctoral program in IT at Politecnico di Milano emphasizes this interdisciplinary nature, which is becoming more and more important in recent technological advances, in collaborative projects, and in the education of young researchers. Accordingly, the focus of advanced research is on pursuing a rigorous approach to specific research topics starting from a broad background in various areas of Information Technology, especially Computer Science and Engineering, Electronics, Systems and Control, and Telecommunications. Each year, more than 50 PhDs graduate from the program. This book gathers the outcomes of the best theses defended in 2021-22 and selected for the IT PhD Award. Each author provides a chapter summarizing his/her findings, including an introduction, a description of methods, the main achievements and future work on the topic. Hence, the book provides a cutting-edge overview of the latest research trends in Information Technology at Politecnico di Milano, presented in an easy-to-read format that will also appeal to non-specialists.

    Multi-source Remote Sensing for Forest Characterization and Monitoring

    Full text link
    As a dominant terrestrial ecosystem of the Earth, forest environments play profound roles in ecology, biodiversity, resource utilization, and management, which highlights the significance of forest characterization and monitoring. Some forest parameters can help track climate change and quantify the global carbon cycle and therefore attract growing attention from various research communities. Compared with traditional in-situ methods, which involve expensive and time-consuming fieldwork, airborne and spaceborne remote sensors collect cost-efficient and consistent observations at global or regional scales and have proven an effective means of forest monitoring. With the looming paradigm shift toward data-intensive science and the development of remote sensors, remote sensing data of higher resolution and diversity have become the mainstream in data analysis and processing. However, significant heterogeneities in multi-source remote sensing data largely restrain their forest applications, urging the research community to come up with effective synergistic strategies. The work presented in this thesis contributes to the field by exploring the potential of Synthetic Aperture Radar (SAR), SAR Polarimetry (PolSAR), SAR Interferometry (InSAR), Polarimetric SAR Interferometry (PolInSAR), Light Detection and Ranging (LiDAR), and multispectral remote sensing in forest characterization and monitoring from three main aspects: forest height estimation, active fire detection, and burned area mapping. First, forest height inversion is demonstrated using airborne L-band dual-baseline repeat-pass PolInSAR data based on modified versions of the Random Motion over Ground (RMoG) model, where the scattering attenuation and wind-derived random motion are described in conditions of a homogeneous and a heterogeneous volume layer, respectively.
A boreal and a tropical forest test site are involved in the experiment to explore the flexibility of the different models over different forest types and, based on that, a leveraging strategy is proposed to boost the accuracy of forest height estimation. The accuracy of model-based forest height inversion is limited by the discrepancy between the theoretical models and actual scenarios and exhibits a strong dependency on the system and scenario parameters. Hence, high-vertical-accuracy LiDAR samples are employed to assist the PolInSAR-based forest height estimation. This multi-source forest height estimation is reformulated as a pan-sharpening task aiming to generate forest heights with high spatial resolution and vertical accuracy based on the synergy of the sparse LiDAR-derived heights and the information embedded in the PolInSAR data. This process is realized by a specifically designed generative adversarial network (GAN), allowing high-accuracy forest height estimation less limited by theoretical models and system parameters. Related experiments are carried out over a boreal and a tropical forest to validate the flexibility of the method. An automated active fire detection framework is proposed for medium-resolution multispectral remote sensing data. The basic part of this framework is a deep-learning-based semantic segmentation model specifically designed for active fire detection. A dataset is constructed with open-access Sentinel-2 imagery for the training and testing of the deep-learning model. The developed framework allows automated Sentinel-2 data download, processing, and generation of the active fire detection results from time and location information provided by the user. Its performance is evaluated in terms of detection accuracy and processing efficiency.
The last part of this thesis explores whether coarse burned area products can be further improved through the synergy of multispectral, SAR, and InSAR features with higher spatial resolutions. A Siamese Self-Attention (SSA) classification is proposed for multi-sensor burned area mapping and a multi-source dataset is constructed at the object level for training and testing. Results are analyzed by different test sites, feature sources, and classification methods to assess the improvements achieved by the proposed method. All developed methods are validated with extensive processing of multi-source data acquired by the Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR), the Land, Vegetation, and Ice Sensor (LVIS), PolSARproSim+, Sentinel-1, and Sentinel-2. I hope these studies constitute a substantial contribution to the forest applications of multi-source remote sensing.
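The burned-area work above rests on multispectral band arithmetic. The thesis's deep-learning classifiers are beyond a short sketch, but a classical spectral index that burned-area pipelines are routinely compared against, the Normalized Burn Ratio (NBR) computed from near-infrared and shortwave-infrared reflectances, is simple to state. The function names here are illustrative, not taken from the thesis:

```python
def nbr(nir, swir):
    """Normalized Burn Ratio for one pixel: (NIR - SWIR) / (NIR + SWIR).
    Healthy vegetation pushes NBR up; burned surfaces push it down."""
    denom = nir + swir
    return (nir - swir) / denom if denom else 0.0

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR between pre- and post-fire acquisitions;
    larger positive values indicate higher burn severity."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)
```

With Sentinel-2, NIR and SWIR would typically come from bands B8 and B12; a pre-fire pixel with high NIR and a post-fire pixel with depressed NIR yields a strongly positive dNBR.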

    Investigation of 3D Body Shapes and Robot Control Algorithms for a Virtual Fitting Room

    Get PDF
    The electronic version of this thesis does not contain the publications. Virtual fitting constitutes a fundamental element of the developments expected to raise the commercial prosperity of online garment retailers to a new level, as it is expected to reduce the manual labor and physical effort required. Nevertheless, most previously proposed computer vision and graphics methods have failed to accurately and realistically model the human body, especially when it comes to 3D modeling of the whole body.
The failure is largely related to the huge amounts of data and computation required, which in turn stem mainly from the inability to properly account for simultaneous variations in the body surface. In addition, most of the foregoing techniques cannot render realistic movement representations in real time. This project intends to overcome the aforementioned shortcomings so as to satisfy the requirements of a virtual fitting room. The proposed methodology consists of scanning and analyzing both the user's body and the prospective garment to be virtually fitted; modeling, extracting measurements and assigning reference points on them; segmenting the 3D visual data imported from the mannequins; and, finally, superimposing, fitting and depicting the resulting garment model on the user's body. The project gathers visual data using a 3D laser scanner and the Kinect optical camera and manages it in the form of a usable database, in order to experimentally implement the algorithms devised. These algorithms primarily provide a realistic visual representation of the garment on the body and enhance the size-advisor system in the context of the virtual fitting room under study.

    Three-Dimensional Reconstruction and Modeling Using Low-Precision Vision Sensors for Automation and Robotics Applications in Construction

    Full text link
    Automation and robotics in construction (ARC) has the potential to assist in the performance of several mundane, repetitive, or dangerous construction tasks autonomously or under the supervision of human workers, and to perform effective site and resource monitoring to stimulate productivity growth and facilitate safety management. When using ARC technologies, three-dimensional (3D) reconstruction is a primary requirement for perceiving and modeling the environment to generate 3D workplace models for various applications. Previous work in ARC has predominantly utilized 3D data captured from high-fidelity and expensive laser scanners for data collection and processing, while paying little attention to 3D reconstruction and modeling using low-precision vision sensors, particularly for indoor ARC applications. This dissertation explores 3D reconstruction and modeling for ARC applications using low-precision vision sensors for both outdoor and indoor applications. First, to handle occlusion in cluttered environments, a joint point cloud completion and surface relation inference framework using red-green-blue and depth (RGB-D) sensors (e.g., Microsoft® Kinect) is proposed to obtain complete 3D models and the surface relations. Then, to explore the integration of prior domain knowledge, a user-guided dimensional analysis method using RGB-D sensors is designed to interactively obtain dimensional information for indoor building environments. To allow deployed ARC systems to be aware of, or monitor, humans in the environment, a real-time human tracking method using a single RGB-D sensor is designed to track specific individuals under various illumination conditions in work environments. Finally, this research also investigates the utilization of aerially collected video images for modeling ongoing excavations and for automated geotechnical hazard detection and monitoring.
The efficacy of the researched methods has been evaluated and validated through several experiments. Specifically, the joint point cloud completion and surface relation inference method is demonstrated to recover all surface connectivity relations and to double the point cloud size by adding points of which more than 87% are correct, thus creating high-quality complete 3D models of the work environment. The user-guided dimensional analysis method provides legitimate user guidance for obtaining dimensions of interest: the average relative errors for the example scenes are less than 7%, while the absolute errors are less than 36 mm. The designed human worker tracking method can successfully track a specific individual in real time with high detection accuracy. The excavation slope stability monitoring framework allows convenient data collection and efficient data processing for real-time job site monitoring. The designed geotechnical hazard detection and mapping methods enable automated identification of landslides using only aerial video images collected by drones.
PhD, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
https://deepblue.lib.umich.edu/bitstream/2027.42/138626/1/yongxiao_1.pd
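The reported dimensional-accuracy figures follow the usual definitions of absolute and relative error. As a minimal sketch of how such per-scene averages are computed (the function name and the sample numbers are hypothetical, not drawn from the dissertation):

```python
def dimension_errors(estimates_mm, truths_mm):
    """Average absolute error (mm) and average relative error for a set
    of estimated vs. ground-truth dimensions of one scene."""
    abs_errs = [abs(e - t) for e, t in zip(estimates_mm, truths_mm)]
    rel_errs = [a / t for a, t in zip(abs_errs, truths_mm)]
    return sum(abs_errs) / len(abs_errs), sum(rel_errs) / len(rel_errs)
```

For two illustrative dimensions estimated as 1020 mm and 2950 mm against ground truths of 1000 mm and 3000 mm, the averages come out at 35 mm absolute and about 1.8% relative, i.e. within the thresholds quoted above.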

    Advances in Remote Sensing-based Disaster Monitoring and Assessment

    Get PDF
    Remote sensing data and techniques have been widely used for disaster monitoring and assessment. In particular, recent advances in sensor technologies and artificial-intelligence-based modeling are very promising for monitoring disasters and readying responses aimed at reducing the damage they cause. This book contains eleven scientific papers that study novel approaches applied to a range of natural disasters such as forest fires, urban land subsidence, floods, and tropical cyclones.