
    Robust and Flexible Persistent Scatterer Interferometry for Long-Term and Large-Scale Displacement Monitoring

    Persistent Scatterer Interferometry (PSI) is a method for monitoring displacements of the Earth's surface from space. It is based on identifying and analyzing stable point scatterers (so-called persistent scatterers, PS) by applying time series analysis to stacks of SAR interferograms. PS points dominate the backscatter of the resolution cells in which they are located and are characterized by little decorrelation. Displacements of such PS points can be monitored with potential sub-millimeter accuracy if disturbing influences are effectively minimized. Over time, PSI has matured into an operational technology for certain applications, yet challenging applications remain. Physical changes of the land surface and changes in the acquisition geometry can cause PS points to appear or disappear over time. The number of continuously coherent PS points decreases with growing time series length, while the number of TPS points increases, i.e., points that are coherent only during one or more separate segments of the analyzed time series. It is therefore desirable to integrate the analysis of such TPS points into PSI in order to obtain a flexible PSI system that can cope with dynamic changes of the land surface and thus enables continuous displacement monitoring. A further challenge for PSI is large-scale monitoring in regions with complex atmospheric conditions, which cause high uncertainty in the displacement time series at large distances from the spatial reference. This thesis presents modifications and extensions, realized on the basis of an existing PSI algorithm, that yield a robust and flexible PSI approach able to handle the challenges above. As a first main contribution, a method is presented that fully integrates TPS points into PSI. Evaluation studies with real SAR data show that integrating TPS points indeed makes it possible to cope with dynamic changes of the land surface and becomes increasingly relevant for PSI-based observation networks as the time series grows longer. The second main contribution is a method for covariance-based reference integration in large-scale PSI applications for estimating spatially correlated noise. It samples the noise at reference pixels with known displacement time series and interpolates it to the remaining PS pixels while accounting for the spatial statistics of the noise. A simulation study and a study with real data show that the method outperforms alternative reference-integration methods for reducing spatially correlated noise in interferograms. The developed PSI method is finally applied to study land subsidence in the Vietnamese part of the Mekong Delta, which has been affected by subsidence and various other environmental problems for several decades. The estimated subsidence rates show high variability at short as well as large spatial scales. The highest rates of up to 6 cm per year occur mainly in urban areas, and most of the subsidence can be shown to originate in the shallow subsurface. The presented method for reducing spatially correlated noise improves the results significantly when an adequate spatial distribution of reference areas is available; in that case the noise is effectively reduced, and independent results from two interferogram stacks acquired from different orbits agree closely. For the analyzed six-year time series, the integration of TPS points yields a considerably larger number of identified TPS than PS points across the entire study area and thus substantially densifies the observation network. A dedicated use case of TPS integration is presented in which TPS points that appeared within the analyzed time series are clustered in order to systematically identify new constructions and analyze their initial displacement time series.
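
    As a rough illustration of the covariance-based reference integration described above (a sketch under assumed names and an assumed exponential covariance model, not the thesis implementation), noise sampled at reference pixels can be interpolated to the remaining PS pixels by simple kriging:

```python
import numpy as np

def exp_cov(d, sill=1.0, range_m=5000.0):
    """Assumed exponential covariance of the noise as a function of distance."""
    return sill * np.exp(-d / range_m)

def krige_noise(ref_xy, ref_noise, ps_xy):
    """Predict spatially correlated noise at PS pixels from reference pixels
    via simple kriging: weights = C_ref^-1 c, prediction = weights^T noise."""
    d_ref = np.linalg.norm(ref_xy[:, None] - ref_xy[None, :], axis=-1)
    d_ps = np.linalg.norm(ps_xy[:, None] - ref_xy[None, :], axis=-1)
    weights = np.linalg.solve(exp_cov(d_ref), exp_cov(d_ps).T)
    return weights.T @ ref_noise

ref_xy = np.array([[0, 0], [8000, 0], [0, 8000]], float)  # reference pixels (m)
ref_noise = np.array([1.2, -0.4, 0.7])    # noise sampled there (e.g. rad)
ps_xy = np.array([[2000, 1000], [6000, 6000]], float)     # remaining PS pixels
print(krige_noise(ref_xy, ref_noise, ps_xy))  # noise estimates to subtract
```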

    Deep generative models for network data synthesis and monitoring

    Measurement and monitoring are fundamental tasks in all networks, enabling downstream management and optimization of the network. Although networks inherently produce abundant monitoring data, accessing and effectively measuring those data is another story. The challenges are manifold. First, network monitoring data are often inaccessible to external users, and it is hard to provide a high-fidelity dataset without leaking commercially sensitive information. Second, effective data collection covering a large-scale network system can be very expensive, given the growing size of networks, e.g., the number of cells in a radio network or the number of flows in an Internet Service Provider (ISP) network. Third, it is difficult to ensure fidelity and efficiency simultaneously in network monitoring, as the resources available in network elements to support measurement functions are too limited to implement sophisticated mechanisms. Finally, understanding and explaining the behavior of the network becomes challenging due to its size and complex structure. Various emerging optimization-based solutions (e.g., compressive sensing) and data-driven solutions (e.g., deep learning) have been proposed for these challenges. However, the fidelity and efficiency of existing methods cannot yet meet current network requirements. The contributions made in this thesis significantly advance the state of the art in network measurement and monitoring techniques. Throughout the thesis, we leverage cutting-edge machine learning technology, namely deep generative modeling. First, we design and realize APPSHOT, an efficient city-scale network traffic sharing system built on a conditional generative model, which requires only open-source contextual data during inference (e.g., land use information and population distribution). Second, we develop GENDT, an efficient drive testing system based on a generative model that combines graph neural networks, conditional generation, and quantified model uncertainty to enhance the efficiency of mobile drive testing. Third, we design and implement DISTILGAN, a high-fidelity, efficient, versatile, and real-time network telemetry system with latent GANs and spectral-temporal networks. Finally, we propose SPOTLIGHT, an accurate, explainable, and efficient anomaly detection system for the Open RAN (Radio Access Network). The lessons learned through this research are summarized, and interesting topics are discussed for future work in this domain. All proposed solutions have been evaluated with real-world datasets and applied to support different applications in real systems.
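
    As a toy illustration of the conditional-generation idea behind APPSHOT (the sketch below assumes its own layer sizes and feature names; the actual architecture is not described here), a generator can map latent noise plus open-source context features to a synthetic traffic series:

```python
import torch
import torch.nn as nn

class ConditionalTrafficGenerator(nn.Module):
    """Toy conditional generator: (noise, context) -> hourly traffic series."""
    def __init__(self, noise_dim=32, context_dim=8, series_len=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + context_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, series_len),
            nn.Softplus(),  # traffic volumes are non-negative
        )

    def forward(self, z, context):
        # context: open-source features, e.g. land-use mix, population density
        return self.net(torch.cat([z, context], dim=-1))

gen = ConditionalTrafficGenerator()
z = torch.randn(1, 32)              # latent noise
context = torch.rand(1, 8)          # placeholder contextual features
synthetic_day = gen(z, context)     # one day of synthetic hourly traffic
print(synthetic_day.shape)          # torch.Size([1, 24])
```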

    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographic data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications for flood modelling science.
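
    A minimal sketch of the LSPIV principle on synthetic frames (the helper below is hypothetical, not from the thesis): the displacement of a textured water-surface patch between consecutive frames is located at the peak of a zero-mean cross-correlation surface; dividing by the frame interval and scaling by the ground sampling distance yields surface velocity.

```python
import numpy as np
from scipy.signal import fftconvolve

def lspiv_displacement(frame_a, frame_b, y, x, win=32, search=16):
    """Find the pixel offset of the patch at (y, x) between two frames
    via the peak of a zero-mean cross-correlation surface."""
    patch = frame_a[y:y + win, x:x + win].astype(float)
    region = frame_b[y - search:y + win + search,
                     x - search:x + win + search].astype(float)
    patch -= patch.mean()
    region -= region.mean()
    corr = fftconvolve(region, patch[::-1, ::-1], mode="valid")
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Divide by the frame interval and multiply by the ground sampling
    # distance to convert the pixel offset into a surface velocity.
    return dy - search, dx - search

rng = np.random.default_rng(0)
frame_a = rng.random((128, 128))                  # textured water surface
frame_b = np.roll(frame_a, (2, 3), axis=(0, 1))   # advected by (2, 3) px
print(lspiv_displacement(frame_a, frame_b, y=48, x=48))  # -> (2, 3)
```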

    Neuromodulatory effects on early visual signal processing

    Understanding how the brain processes information and generates simple to complex behavior constitutes one of the core objectives in systems neuroscience. However, when studying different neural circuits, their dynamics and interactions, researchers often assume fixed connectivity, overlooking a crucial factor: the effect of neuromodulators. Neuromodulators can modulate circuit activity depending on several aspects, such as different brain states or sensory contexts. Therefore, considering the modulatory effects of neuromodulators on the functionality of neural circuits is an indispensable step towards a more complete picture of the brain's ability to process information. Generally, this issue affects all neural systems; hence this thesis addresses it with an experimental and computational approach to resolve neuromodulatory effects at the cell-type level in a well-defined system, the mouse retina. In the first study, we established and applied a machine-learning-based classification algorithm to identify individual functional retinal ganglion cell types, which enabled detailed cell type-resolved analyses. We applied the classifier to newly acquired data of light-evoked retinal ganglion cell responses and successfully identified their functional types. Here, the cell type-resolved analysis revealed that a particular principle of efficient coding applies to all types in a similar way. In a second study, we focused on the issue of inter-experimental variability that can arise when pooling datasets, as subtle variations between the individual datasets may complicate further downstream analyses. To tackle this, we proposed a theoretical framework based on an adversarial autoencoder with the objective of removing inter-experimental variability from the pooled dataset while preserving the underlying biological signal of interest. In the last study of this thesis, we investigated the functional effects of the neuromodulator nitric oxide on the retinal output signal. To this end, we used our previously developed retinal ganglion cell type classifier to unravel type-specific effects and established a paired recording protocol to account for type-specific time-dependent effects. We found that certain retinal ganglion cell types showed adaptational type-specific changes and that nitric oxide distinctly modulated a particular group of retinal ganglion cells. In summary, I first present several experimental and computational methods that make it possible to study functional neuromodulatory effects on the retinal output signal in a cell type-resolved manner and, second, use these tools to demonstrate their feasibility by studying the neuromodulator nitric oxide.
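
    A minimal sketch of the adversarial-autoencoder idea used in the second study (assumed layer sizes and losses, not the study's actual model): a dataset-ID discriminator on the latent code is trained adversarially against the encoder, so the latent space retains the biological signal needed for reconstruction but loses dataset identity.

```python
import torch
import torch.nn as nn

class AdversarialAutoencoder(nn.Module):
    """Autoencoder whose latent code is pushed to be dataset-agnostic."""
    def __init__(self, n_features=64, n_latent=16, n_datasets=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))
        # Predicts which experiment/dataset a latent code came from.
        self.discriminator = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                           nn.Linear(32, n_datasets))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.discriminator(z)

model = AdversarialAutoencoder()
x = torch.randn(8, 64)                  # e.g. response features per cell
recon, dataset_logits = model(x)
recon_loss = nn.functional.mse_loss(recon, x)
print(recon_loss.item())
# The encoder is updated to *maximize* the discriminator's classification
# loss (e.g. via a gradient-reversal layer or alternating updates), while
# the reconstruction loss preserves the biological signal.
```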

    Beyond correlation: optimal transport metrics for characterizing representational stability and remapping in neurons encoding spatial memory

    Introduction: Spatial representations in the entorhinal cortex (EC) and hippocampus (HPC) are fundamental to cognitive functions like navigation and memory. These representations, embodied in spatial field maps, dynamically remap in response to environmental changes. However, current methods, such as Pearson's correlation coefficient, struggle to capture the complexity of these remapping events, especially when fields do not overlap or transformations are non-linear. This limitation hinders our understanding and quantification of remapping, a key aspect of spatial memory function. Methods: We propose a family of metrics based on the Earth Mover's Distance (EMD) as a versatile framework for characterizing remapping. Results: The EMD provides a granular, noise-resistant, and rate-robust description of remapping. This approach enables the identification of specific cell types and the characterization of remapping in various scenarios, including disease models. Furthermore, the EMD's properties can be manipulated to identify spatially tuned cell types and to explore remapping as it relates to alternate information forms such as spatiotemporal coding. Discussion: We present a feasible, lightweight approach that complements traditional methods. Our findings underscore the potential of the EMD as a powerful tool for enhancing our understanding of remapping in the brain and its implications for spatial navigation, memory studies and beyond.
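
    A toy example on synthetic 1D rate maps makes the motivation concrete: for two non-overlapping place fields, Pearson's correlation is essentially the same regardless of how far the field moved, while the EMD (here via scipy's 1D Wasserstein distance) scales with the shift.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two 1D firing-rate maps over the same track (toy data): a place field
# that shifts location between sessions without overlapping bins.
bins = np.arange(100)                             # spatial bin centers (cm)
rate_a = np.exp(-0.5 * ((bins - 20) / 5) ** 2)    # field at 20 cm
rate_b = np.exp(-0.5 * ((bins - 70) / 5) ** 2)    # remapped to 70 cm

# Pearson correlation gives the same small negative value for ANY
# non-overlapping shift, hiding the magnitude of remapping; the EMD
# grows with the distance the field moved.
print(np.corrcoef(rate_a, rate_b)[0, 1])                 # ~ -0.2, shift-blind
print(wasserstein_distance(bins, bins, rate_a, rate_b))  # ~ 50 (cm)
```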

    Deep learning model based on multi-scale feature fusion for precipitation nowcasting

    Forecasting heavy precipitation accurately is a challenging task for most deep learning (DL)-based models. To address this, we present a novel DL architecture called “multi-scale feature fusion” (MFF) that can forecast precipitation with a lead time of up to 3 h. The MFF model uses convolution kernels with varying sizes to create multi-scale receptive fields, which helps to capture the movement features of precipitation systems, such as their shape, movement direction, and speed. Additionally, the architecture utilizes the mechanism of discrete probability to reduce uncertainties and forecast errors, enabling it to predict heavy precipitation even at longer lead times. For model training, we use 4 years of radar echo data from 2018 to 2021; 1 year of data from 2022 is used for model testing. We compare the MFF model with three existing extrapolative models: time series residual convolution (TSRC), optical flow (OF), and UNet. The results show that MFF achieves superior forecast skill, with a high probability of detection (POD), low false alarm rate (FAR), small mean absolute error (MAE), and high structural similarity index (SSIM). Notably, MFF can predict high-intensity precipitation fields at a 3 h lead time, while the other three models cannot. Furthermore, MFF mitigates the smoothing effect in the forecast field, as observed in the radially averaged power spectrum (RAPS) results. Our future work will focus on incorporating multi-source meteorological variables, making structural adjustments to the network, and combining these with numerical models to further improve the forecast skill for heavy precipitation at longer lead times.
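
    A minimal sketch of the multi-scale feature fusion idea (assumed kernel sizes and channel counts, not the paper's exact configuration): parallel convolutions with different kernel sizes produce multi-scale receptive fields whose feature maps are fused by concatenation and a 1x1 convolution.

```python
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    """Parallel convolutions with different kernel sizes, fused by 1x1 conv."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        # Small kernels capture local echo shape; larger kernels capture
        # broader movement (direction, speed) of precipitation systems.
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
            for k in kernel_sizes
        )
        self.fuse = nn.Conv2d(len(kernel_sizes) * out_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(1, 8, 128, 128)   # e.g. 8 past radar echo frames as channels
print(MultiScaleFusionBlock(8, 16)(x).shape)   # torch.Size([1, 16, 128, 128])
```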

    Content-aware frame interpolation (CAFI): deep learning-based temporal super-resolution for fast bioimaging

    The development of high-resolution microscopes has made it possible to investigate cellular processes in 3D and over time. However, observing fast cellular dynamics remains challenging because of photobleaching and phototoxicity. Here we report the implementation of two content-aware frame interpolation (CAFI) deep learning networks, Zooming SlowMo and Depth-Aware Video Frame Interpolation, that are highly suited for accurately predicting images in between image pairs, therefore improving the temporal resolution of image series post-acquisition. We show that CAFI is capable of understanding the motion context of biological structures and can perform better than standard interpolation methods. We benchmark CAFI’s performance on 12 different datasets, obtained from four different microscopy modalities, and demonstrate its capabilities for single-particle tracking and nuclear segmentation. CAFI potentially allows for reduced light exposure and phototoxicity on the sample for improved long-term live-cell imaging. The models and the training and testing data are available via the ZeroCostDL4Mic platform
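
    For contrast with such learned approaches, the naive baseline they improve upon is plain linear blending of adjacent frames; the sketch below (a hypothetical helper, not part of CAFI) doubles the temporal resolution of an image stack this way, whereas CAFI-style networks replace the frame average with motion-aware prediction.

```python
import numpy as np

def linear_interpolate_stack(stack):
    """Double the temporal resolution of a (T, H, W) image stack by
    inserting the mean of each adjacent frame pair (motion-blind)."""
    mids = 0.5 * (stack[:-1] + stack[1:])
    out = np.empty((2 * stack.shape[0] - 1, *stack.shape[1:]), stack.dtype)
    out[0::2] = stack   # original frames at even indices
    out[1::2] = mids    # blended frames in between
    return out

stack = np.random.rand(10, 64, 64).astype(np.float32)  # toy time-lapse
print(linear_interpolate_stack(stack).shape)           # (19, 64, 64)
```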

    Development of an integrated model framework for multi-air-pollutant exposure assessments in high-density cities

    Exposure models for some criteria air pollutants have been intensively developed in past research; multi-air-pollutant exposure models, especially for particulate chemical species, have however been overlooked in Asia. The lack of an integrated model framework to calculate multi-air-pollutant exposure has hindered combined exposure assessment and the corresponding health assessment. This work applied the land-use regression (LUR) approach to develop an integrated model framework to estimate 2017 annual-average exposure to multiple air pollutants in a typical high-rise, high-density Asian city (Hong Kong, China), including four criteria air pollutants (particulate matter with an aerodynamic diameter equal to or less than 10 ”m (PM10) and 2.5 ”m (PM2.5), nitrogen dioxide (NO2), and ozone (O3)) as well as four major PM10 chemical species. Our integrated multi-air-pollutant exposure model framework is capable of explaining 91 %–97 % of the variability in measured air pollutant concentrations, with leave-one-out cross-validation R2 values ranging from 0.73 to 0.93. Using the model framework, the spatial distribution of the concentration of each air pollutant was generated at a spatial resolution of 500 m. The LUR model-derived spatial distribution maps revealed weak-to-moderate spatial correlations between the PM10 chemical species and the criteria air pollutants, which may help to distinguish their independent chronic health effects. In addition, further improvements in the development of air pollution exposure models are discussed. This study proposes an integrated model framework for estimating multi-air-pollutant exposure in high-density and high-rise urban areas, serving as an important tool for multi-air-pollutant exposure assessment in epidemiological studies.
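
    A minimal LUR sketch on synthetic data (invented predictor names and coefficients) illustrating the workflow the abstract describes: regress site measurements on buffer-based land-use covariates, then assess the model with leave-one-out cross-validation before predicting on a regular grid.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

# Synthetic NO2 at monitoring sites as a function of land-use covariates.
rng = np.random.default_rng(1)
n_sites = 30
X = np.column_stack([
    rng.random(n_sites),   # e.g. road length within a 500 m buffer
    rng.random(n_sites),   # e.g. building volume density
    rng.random(n_sites),   # e.g. population within 1 km
])
true_coefs = np.array([25.0, 15.0, 8.0])
y = 20 + X @ true_coefs + rng.normal(0, 2, n_sites)  # synthetic NO2 (ug/m3)

model = LinearRegression().fit(X, y)
y_loocv = cross_val_predict(model, X, y, cv=LeaveOneOut())
print(f"model R2: {model.score(X, y):.2f}")
print(f"LOOCV R2: {r2_score(y, y_loocv):.2f}")
# The fitted model can then predict on a 500 m grid of the same covariates
# to map exposure across the city.
```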

    Development and deployment of an improved Anopheles gambiae s.l. field surveillance by adaptive spatial sampling design

    Introduction: Accurate assessments of vector occurrence and abundance, particularly for widespread vector-borne diseases such as malaria, are crucial for the efficient deployment of disease surveillance and control interventions. Although previous studies have explored the benefits of adaptive sampling for identifying disease hotspots (mostly through simulations), limited research has been conducted on field surveillance of malaria vectors. Methods: We developed and implemented an adaptive spatial sampling design in southwestern Benin, specifically targeting potential but uncertain hotspots of Anopheles gambiae, a major malaria vector in sub-Saharan Africa. The first phase of our proposed design involved delineating ecological zones and employing a proportional lattice-with-close-pairs sampling design to maximize spatial coverage and representativeness of the ecological zones and to account for spatial dependence in mosquito counts. In the second phase, we employed a spatial adaptive sampling design focusing on high-risk areas with the greatest uncertainty. Results: The adaptive spatial sampling design used a smaller sample size than the first phase yet led to improved predictions for both out-of-sample and training data. Collections of Anopheles gambiae in high-risk, low-uncertainty areas were nearly triple those in high-risk, high-uncertainty areas. However, the overall model uncertainty increased. Discussion: While the adaptive sampling design allowed for increased collections of Anopheles gambiae mosquitoes with a reduced sample size, it also led to a general increase in uncertainty, highlighting the potential trade-offs in multi-criteria adaptive sampling designs. It is imperative that future research focuses on understanding these trade-offs to expedite effective malaria control and elimination efforts.
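
    A toy sketch of a second-phase selection rule of the kind described (the scoring below is an assumption, not the study's actual criterion): rank candidate sites by predicted abundance times predictive uncertainty and sample the top k.

```python
import numpy as np

# Model outputs on candidate sites (synthetic placeholders): predicted
# mosquito abundance and the predictive uncertainty of those predictions.
rng = np.random.default_rng(2)
n_candidates = 200
pred_abundance = rng.gamma(2.0, 5.0, n_candidates)   # predicted counts
pred_sd = rng.uniform(0.5, 3.0, n_candidates)        # predictive std. dev.

def adaptive_selection(abundance, sd, k=20):
    """Rank candidate sites by risk x uncertainty and return the top k."""
    score = abundance * sd
    return np.argsort(score)[::-1][:k]

next_round_sites = adaptive_selection(pred_abundance, pred_sd)
print(next_round_sites[:5])   # indices of the first sites to visit next
```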
    • 
