
    Geodetic monitoring of complex shaped infrastructures using Ground-Based InSAR

    In the context of climate change, alternatives to fossil energy sources must be used as much as possible to produce electricity, and hydroelectric power generation from dams is one of the most effective options. Various monitoring sensors with different characteristics in terms of spatial resolution, temporal resolution and accuracy can be installed to assess their safe operation. Among the available techniques, ground-based synthetic aperture radar (GB-SAR) has not yet been widely adopted for this purpose: despite offering a remarkable balance between these attributes, its sensitivity to atmospheric disturbances, its specific acquisition geometry and the need for phase unwrapping all constrain its use. Several processing strategies are developed in this thesis to exploit the strengths of GB-SAR systems, namely continuous, flexible and autonomous observation combined with high resolution and accuracy. The first challenge is to accurately localise the GB-SAR and estimate its azimuth in order to improve the geocoding of the image in the subsequent step. A ray-tracing algorithm and tomographic techniques are used to recover these external sensor parameters, and validation with corner reflectors confirms a significant error reduction. For the subsequent geocoding, however, challenges persist for vertical structures because of foreshortening and layover, which notably degrade the geocoding quality of the observed points. These issues arise when multiple points at different elevations fall within a single resolution cell, making it difficult to pinpoint the scattering point responsible for the returned signal. To overcome this, a Bayesian approach based on intensity models is formulated to improve the accuracy of the geocoding process. The approach is validated on a dam with a very particular structure in the Black Forest in Germany. The second part of the thesis addresses the feasibility of using GB-SAR systems for long-term geodetic monitoring of large structures. A first assessment tests large temporal baselines between acquisitions for epoch-wise monitoring; because of the large displacements, phase unwrapping cannot recover all the information, and an improvement is obtained by adapting the geometry of the signal processing using principal component analysis. The main case study consists of several campaigns from different stations at Enguri Dam in Georgia. The consistency of the estimated displacement map is assessed by comparing it with a numerical model calibrated on plumb-line data. The strong agreement between the two results supports the use of GB-SAR for epoch-wise monitoring, as it can measure several thousand points on the dam, and it also demonstrates the possibility of detecting local anomalies in the numerical model. Finally, the instrument has been installed for continuous monitoring at Enguri Dam for over two years. A dedicated processing workflow is developed to eliminate the drift that occurs with classical interferometric algorithms and to achieve the accuracy required for geodetic monitoring. The analysis of the resulting time series agrees well with classical parametric models of dam deformation. Moreover, the results of this processing strategy are also compared with the numerical model and show high consistency. A final corroborating result is the comparison of the GB-SAR time series with the output of four GNSS stations installed on the dam crest. The developed algorithms and methods extend the capabilities of GB-SAR for dam monitoring in different configurations; it can be a valuable supplement to other classical sensors for long-term geodetic observation as well as for short-term monitoring during particular dam operations.
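    The abstract does not include implementation details, but the interferometric step it relies on can be illustrated. The sketch below is a minimal illustration, not the thesis code: it converts an unwrapped GB-SAR interferometric phase difference into line-of-sight displacement using the standard relation d = -λΔφ/(4π). The wavelength value and array names are assumptions chosen for illustration.

```python
import numpy as np

# Minimal sketch of the standard GB-SAR interferometric relation:
# line-of-sight displacement d = -(wavelength / (4*pi)) * unwrapped phase.
# The Ku-band wavelength below is an illustrative assumption, not a value
# taken from the thesis.

WAVELENGTH_M = 0.0176  # ~17.6 mm, typical Ku-band GB-SAR (assumed)

def los_displacement(unwrapped_phase_rad: np.ndarray) -> np.ndarray:
    """Convert unwrapped interferometric phase (radians) to LOS displacement (m)."""
    return -(WAVELENGTH_M / (4.0 * np.pi)) * unwrapped_phase_rad

# Example: one full phase cycle corresponds to half a wavelength of apparent
# motion along the line of sight.
phase = np.array([0.0, np.pi, 2.0 * np.pi])
print(los_displacement(phase) * 1000.0)  # displacement in millimetres
```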

    Land use classification in mine-agriculture compound area based on multi-feature random forest: a case study of Peixian

    Introduction: Land use classification plays a critical role in analyzing land use/cover change (LUCC). Remote sensing land use classification based on machine learning algorithms is one of the hot topics in current remote sensing research. The diversity of surface objects and the complexity of their distribution in mixed mining and agricultural areas challenge the classification of traditional remote sensing images, and the rich information contained in these images has not been fully utilized. Methods: A quantitative difference index was proposed to quantify and select the texture features of easily confused land types, a random forest (RF) classification method with multi-feature combination classification schemes for remote sensing images was developed, and land use information for the mine-agriculture compound area of Peixian in Xuzhou, China was extracted. Results: The quantitative difference index proved effective in reducing the dimensionality of the feature parameters, lowering the optimal feature scheme dimension from 57 to 22. Among the four classification methods based on the optimal feature classification scheme, the RF algorithm was the most effective, with a classification accuracy of 92.38% and a Kappa coefficient of 0.90, outperforming the support vector machine (SVM), classification and regression tree (CART), and neural network (NN) algorithms. Conclusion: The findings indicate that the quantitative difference index is a novel and effective approach for discerning distinct texture features among various land types and plays a crucial role in the selection and optimization of texture features in multispectral remote sensing imagery. The RF classification method, leveraging a multi-feature combination, provides fresh methodological support for the precise classification of intricate ground objects within the mine-agriculture compound area.
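    The abstract reports accuracy and a Kappa coefficient for four classifiers evaluated on the optimal feature scheme. A minimal sketch of such a comparison is shown below; it assumes a generic feature matrix X and label vector y (random placeholders, not the Peixian dataset) and uses scikit-learn's standard implementations as stand-ins for the paper's classifiers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier          # CART
from sklearn.neural_network import MLPClassifier         # simple NN stand-in
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Illustrative stand-in data: 22 selected features per sample (the paper's
# optimal scheme dimension) and 6 land-use classes. Replace with real samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 22))
y = rng.integers(0, 6, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "CART": DecisionTreeClassifier(random_state=0),
    "NN": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.4f}, "
          f"kappa={cohen_kappa_score(y_test, pred):.4f}")
```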

    Pre-Deployment Testing of Low Speed, Urban Road Autonomous Driving in a Simulated Environment

    Low speed autonomous shuttles emulating SAE Level 4 automated driving with human-driver-assisted autonomy have been operating in geo-fenced areas in several cities in the US and elsewhere in the world. These autonomous vehicles (AV) are operated by small to mid-sized technology companies that lack the resources of automotive OEMs for exhaustive, comprehensive testing of their AV technology solutions before public road deployment. Because they operate at low speed and therefore avoid highways, the base vehicles of these AV shuttles are not required to go through rigorous certification tests. The way driver-assisted AV technology is tested and approved for public road deployment is continuously evolving, but it is not standardized and differs between the states where these vehicles operate. Currently, AVs and AV shuttles deployed on public roads are using these deployments to test and improve their technology. However, this is not the right approach: safe and extensive testing in a lab and controlled test environment, including Model-in-the-Loop (MiL), Hardware-in-the-Loop (HiL) and Autonomous-Vehicle-in-the-Loop (AViL) testing, should be a prerequisite for such public road deployments. This paper presents three-dimensional virtual modeling of an AV shuttle deployment site and simulation testing in this virtual environment. These AV shuttles have two deployment sites in Columbus through the Department of Transportation funded Smart City Challenge project named Smart Columbus; the Linden residential area AV shuttle deployment site of Smart Columbus is used as the specific example to illustrate the AV testing method proposed in this paper.

    Exploring the effects of robotic design on learning and neural control

    The ongoing deep learning revolution has allowed computers to outclass humans in various games and to perceive features imperceptible to humans during classification tasks. Current machine learning techniques have clearly distinguished themselves in specialized tasks. However, we have yet to see robots capable of performing multiple tasks at an expert level. Most work in this field focuses on developing more sophisticated learning algorithms for a robot's controller, given a largely static and presupposed robotic design. By focusing on the development of robotic bodies, rather than neural controllers, I have discovered that robots can be designed such that they overcome many of the pitfalls currently encountered by neural controllers in multitask settings. Through this discovery, I also present novel metrics that explicitly measure the learning ability of a robotic design and its resistance to common problems such as catastrophic interference. Traditionally, physical robot design requires human engineers to plan every aspect of the system, which is expensive and often relies on human intuition. In contrast, in the field of evolutionary robotics, evolutionary algorithms are used to automatically create optimized designs; however, such designs are often still limited in their ability to perform in a multitask setting. The metrics created and presented here give a novel path to automated design that allows evolved robots to synergize with their controllers, improving the computational efficiency of their learning while overcoming catastrophic interference. Overall, this dissertation demonstrates the ability to automatically design robots that are more general-purpose than current robots and that can perform various tasks while requiring less computation. Comment: arXiv admin note: text overlap with arXiv:2008.0639
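    The dissertation's own metrics are not spelled out in the abstract. As a generic illustration of the kind of quantity involved, the sketch below measures catastrophic interference as the drop in task-A performance after a controller is subsequently trained on task B; the `train` and `evaluate` callables are hypothetical placeholders, not the author's definitions.

```python
# Generic illustration of a catastrophic-interference measure: how much
# performance on task A is lost after subsequently training on task B.
# `train` and `evaluate` are hypothetical placeholders for a robot
# controller's training loop and task-specific evaluation.

def interference(controller, task_a, task_b, train, evaluate):
    train(controller, task_a)
    score_a_before = evaluate(controller, task_a)

    train(controller, task_b)          # continue training on a second task
    score_a_after = evaluate(controller, task_a)

    # Positive values indicate forgetting of task A caused by learning task B.
    return score_a_before - score_a_after
```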

    Computer Vision-Based Hand Tracking and 3D Reconstruction as a Human-Computer Input Modality with Clinical Application

    The recent pandemic has impeded patients with hand injuries from connecting in person with their therapists. To address this challenge and improve hand telerehabilitation, we propose two computer vision-based technologies, photogrammetry and augmented reality, as alternative and affordable solutions for visualization and remote monitoring of hand trauma without costly equipment. In this thesis, we extend the application of 3D rendering and a virtual reality-based user interface to hand therapy. We compare the performance of four popular photogrammetry software packages in reconstructing a 3D model of a synthetic human hand from videos captured with a smartphone, evaluating the visual quality, reconstruction time and geometric accuracy of the output model meshes. Reality Capture produces the best result, with an output mesh error of only 1 mm and a total reconstruction time of 15 minutes. We developed an augmented reality app using MediaPipe algorithms that extract hand key points, finger joint coordinates and angles in real time from hand images or live stream media, and we conducted a study to investigate its input variability and validity as a reliable tool for remote assessment of finger range of motion. The intraclass correlation coefficient between DIGITS and in-person measurement is 0.767–0.81 for finger extension and 0.958–0.857 for finger flexion. Finally, we developed and surveyed the usability of a mobile application that collects patient data (medical history, self-reported pain levels and hand 3D models) and transfers them to therapists. These technologies can improve hand telerehabilitation, aid clinicians in monitoring hand conditions remotely, and support decisions on appropriate therapy, medication and hand orthoses.
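    The abstract describes extracting finger joint coordinates and angles with MediaPipe. A minimal sketch of that step is shown below, assuming a single static image and using MediaPipe's Hands solution; the joint indices chosen here illustrate one index-finger PIP joint and are not necessarily those used by the DIGITS app, and the input path is a placeholder.

```python
import cv2
import mediapipe as mp
import numpy as np

def joint_angle(a, b, c):
    """Angle at landmark b (degrees) formed by landmarks a-b-c."""
    a, b, c = (np.array([p.x, p.y, p.z]) for p in (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

image = cv2.imread("hand.jpg")  # placeholder input image path
with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
    results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if results.multi_hand_landmarks:
    lm = results.multi_hand_landmarks[0].landmark
    # Index-finger PIP angle from MCP (5), PIP (6) and DIP (7) landmarks.
    print(f"Index PIP angle: {joint_angle(lm[5], lm[6], lm[7]):.1f} deg")
```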

    Masked Discriminators for Content-Consistent Unpaired Image-to-Image Translation

    A common goal of unpaired image-to-image translation is to preserve content consistency between source images and translated images while mimicking the style of the target domain. Due to biases between the datasets of both domains, many methods suffer from inconsistencies caused by the translation process. Most approaches introduced to mitigate these inconsistencies do not constrain the discriminator, leading to an even more ill-posed training setup. Moreover, none of these approaches is designed for larger crop sizes. In this work, we show that masking the inputs of a global discriminator for both domains with a content-based mask is sufficient to reduce content inconsistencies significantly. However, this strategy leads to artifacts that can be traced back to the masking process. To reduce these artifacts, we introduce a local discriminator that operates on pairs of small crops selected with a similarity sampling strategy. Furthermore, we apply this sampling strategy to sample global input crops from the source and target datasets. In addition, we propose feature-attentive denormalization to selectively incorporate content-based statistics into the generator stream. In our experiments, we show that our method achieves state-of-the-art performance in photorealistic sim-to-real translation and weather translation, and also performs well in day-to-night translation. Additionally, we propose the cKVD metric, which builds on the sKVD metric and enables the examination of translation quality at the class or category level. Comment: 24 pages, 22 figures, under review
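    The core idea of masking a global discriminator's inputs with a content-based mask can be sketched in a few lines of PyTorch. The sketch below is only an illustration of that idea: the tiny discriminator and the random mask are placeholders, not the paper's networks or masking procedure.

```python
import torch
import torch.nn as nn

# Minimal sketch: apply the same content-based mask to real and translated
# images before the global discriminator sees them, so the adversarial signal
# comes only from regions where content is expected to match.
# The discriminator below is a toy placeholder, not the paper's network.

class TinyDiscriminator(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

disc = TinyDiscriminator()
real = torch.randn(2, 3, 128, 128)                  # target-domain crops
fake = torch.randn(2, 3, 128, 128)                  # translated source crops
mask = (torch.rand(2, 1, 128, 128) > 0.5).float()   # content-based mask (placeholder)

# Mask inputs for both domains before computing the adversarial logits.
logits_real = disc(real * mask)
logits_fake = disc(fake * mask)
loss = nn.functional.binary_cross_entropy_with_logits(
    logits_real, torch.ones_like(logits_real)
) + nn.functional.binary_cross_entropy_with_logits(
    logits_fake, torch.zeros_like(logits_fake)
)
print(loss.item())
```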

    Mortar Characterization of Historical Masonry Damaged by Riverbank Failure: The Case of Lungarno Torrigiani (Florence)

    Research on structural masonry associated with geo-hydrological hazards in Cultural Heritage is a multidisciplinary issue, requiring consideration of several aspects including the characterization of the materials used. On 25 May 2016, a loss of water from the subterranean pipes of the aqueduct caused an Arno riverbank failure, damaging a 100 m long portion of the historical embankment wall of Lungarno Torrigiani in Florence. The historical masonry was built in 1854–1855 by Giuseppe Poggi and represents a historical example of an engineering approach to riverbank construction: a massive scarp wall on foundation piles with a rubble masonry internal core. The failure event caused only a cusp-shaped deformation of the wall, without any shattering or toppling. A complete characterization of the mortars was performed to identify the technologies, raw materials and state of conservation in order to understand why the wall did not collapse. Indeed, the mortars used influenced the structural behavior of the masonry, and their characterization was fundamental to improving knowledge of the mechanical properties of civil architectural heritage walls. Therefore, the aim of this research was to analyze the mortars from mineralogical-petrographic, physical and mechanical points of view, to evaluate the contribution of the materials to damage events. Moreover, the results of this study helped to identify compatible design solutions for the installation of hydraulically and statically functional structures to contain the riverbank.

    Using Satellite-Derived Fire Arrival Times for Coupled Wildfire-Air Quality Simulations at Regional Scales of the 2020 California Wildfire Season

    Wildfire frequency has increased in the Western US over recent decades, driven by climate change and a legacy of forest management practices. Consequently, human structures, health, and life are increasingly at risk from wildfires, and wildfire smoke presents a growing hazard for regional and national air quality. In response, many scientific tools have been developed to study and forecast wildfire behavior or to test interventions that may mitigate risk. In this study, we present a retrospective analysis of one month of the 2020 Northern California wildfire season, when many wildfires with varying environments and behavior impacted regional air quality. We simulate this period using a coupled numerical weather prediction model with online atmospheric chemistry and compare two approaches to representing smoke emissions: an online fire spread model driven by remotely sensed fire arrival times and a biomass burning emissions inventory. First, we quantify the differences between the two approaches in the timing of fire activity and characterize their impact on estimates of smoke emissions. Next, we compare the simulated smoke to surface observations and remotely sensed smoke; we find that despite differences in the simulated surface smoke concentrations, the two models achieve similar levels of accuracy. We present a detailed comparison of the performance and relative strengths of both approaches and discuss potential refinements that could further improve future simulations of wildfire smoke. Finally, we characterize the interactions between smoke and meteorology during this event and discuss the implications that increases in regional smoke may have for future meteorological conditions.
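    The comparison against surface observations mentioned in the abstract typically reduces to station-level error statistics. The sketch below is a generic illustration of such an evaluation, using small hypothetical arrays of observed and simulated surface PM2.5; it does not reproduce the study's data or tooling.

```python
import numpy as np

# Generic sketch of evaluating simulated surface smoke concentrations against
# co-located observations (e.g. hourly PM2.5 at monitoring sites).
# The arrays are placeholders, not values from the study.

obs = np.array([12.0, 35.0, 80.0, 150.0, 60.0])   # observed PM2.5, ug/m3
sim = np.array([10.0, 42.0, 70.0, 165.0, 55.0])   # simulated PM2.5, ug/m3

bias = np.mean(sim - obs)                          # mean bias
rmse = np.sqrt(np.mean((sim - obs) ** 2))          # root-mean-square error
corr = np.corrcoef(obs, sim)[0, 1]                 # Pearson correlation

print(f"bias={bias:.1f} ug/m3, rmse={rmse:.1f} ug/m3, r={corr:.2f}")
```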

    Web-based e-tourism application with weather visualization

    This thesis was carried out in collaboration with Agder XR and Lindesnes fyrmuseum. Lindesnes fyrmuseum has wanted to reach more people, primarily those who cannot visit in person, and thereby spark interest in the destination. To this end, an e-tourism solution was created in the form of a visualization application that reproduced the weather at Lindesnes. The scope of the product was limited, with the primary focus on being able to show a few weather types. Two variants of the prototype were developed, and for both it was relevant to focus on Human-Centered Design by applying Design Thinking. One variant uses game technology for visualization, through the Unity game engine. The other uses rendered 360-degree video, played back through a web solution based on an example from three.js. These were developed in order to compare which variant is actually easiest to develop for the web, and to see which one users would prefer. The Unity version offered more options, while the 360 video version was more limited; another major difference was the visual quality. Method triangulation was carried out, and through usability testing, observations, interviews and questionnaires were used to investigate the stated research question. This was done to find out whether the developed prototypes were user-friendly and whether they live up to what a user expects to see in an e-tourism application. Both prototypes turned out not to be complete e-tourism solutions: they do not generate enough interest for further use, since only the weather is shown. It is therefore clear that the solutions should include more features, chiefly the ability to let users plan their trip. This could be done by providing more information about the place and nearby attractions. The user should also be able to explore the area visually in more detail. In this way, the solution can also better accommodate those who are unable to visit, as they would get a fuller experience.