
    Flood dynamics derived from video remote sensing

    Flooding is by far the most pervasive natural hazard, and the human impacts of floods are expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models. Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models. The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which are used to estimate river discharge. Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice.
    The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications for flood modelling science.
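    As a rough illustration of the image velocimetry idea described above, the sketch below estimates a surface velocity field from two consecutive video frames by correlating interrogation windows, in the spirit of LSPIV. The thesis does not publish its code; the video path, frame interval DT and ground sampling distance GSD are hypothetical placeholders, and phase correlation stands in for the cross-correlation typically used in LSPIV software:

```python
# Minimal LSPIV-style surface velocity sketch (illustrative, not the thesis pipeline).
# Assumes a stabilised video, a known frame interval DT (s) and a known
# ground sampling distance GSD (m/pixel); all three values are placeholders.
import cv2
import numpy as np

DT = 1 / 25.0    # assumed frame interval, seconds
GSD = 0.05       # assumed ground sampling distance, metres per pixel
WIN = 64         # interrogation window size, pixels

def surface_velocities(frame_a: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Estimate per-window surface velocities (m/s) via phase correlation."""
    a = np.float32(frame_a)
    b = np.float32(frame_b)
    rows = range(0, a.shape[0] - WIN, WIN)
    cols = range(0, a.shape[1] - WIN, WIN)
    field = np.zeros((len(rows), len(cols), 2))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            win_a = np.ascontiguousarray(a[r:r + WIN, c:c + WIN])
            win_b = np.ascontiguousarray(b[r:r + WIN, c:c + WIN])
            # Sub-pixel displacement of the tracer pattern between the frames.
            (dx, dy), _ = cv2.phaseCorrelate(win_a, win_b)
            field[i, j] = (dx * GSD / DT, dy * GSD / DT)
    return field

# Usage: read two consecutive frames and compute the velocity field.
cap = cv2.VideoCapture("river.mp4")   # hypothetical input video
ok1, f1 = cap.read()
ok2, f2 = cap.read()
if ok1 and ok2:
    v = surface_velocities(cv2.cvtColor(f1, cv2.COLOR_BGR2GRAY),
                           cv2.cvtColor(f2, cv2.COLOR_BGR2GRAY))
    print("median speed (m/s):", np.median(np.hypot(v[..., 0], v[..., 1])))
```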

    Using Image Translation To Synthesize Amyloid Beta From Structural MRI

    Amyloid-beta and brain atrophy are known hallmarks of Alzheimer’s Disease (AD) and can be quantified with positron emission tomography (PET) and structural magnetic resonance imaging (MRI), respectively. PET uses radiotracers that bind to amyloid-beta, whereas MRI can measure brain morphology. PET scans have limitations, including cost, invasiveness (they involve injections and exposure to ionizing radiation) and limited accessibility, making PET impractical for screening for early-onset AD. Conversely, MRI is cheaper, less invasive (free from ionizing radiation) and more widely available; however, it cannot provide the necessary molecular information. There is a known relationship between amyloid-beta and brain atrophy. This thesis aims to synthesize amyloid-beta PET images from structural MRI using image translation, an advanced form of machine learning. The developed models achieved high similarity metrics between the real and synthetic PET images and a high degree of accuracy in radiotracer quantification. The results are highly impactful, as they enable amyloid-beta measurements from every MRI at no additional cost.
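    A minimal sketch of the paired image-to-image translation idea underlying this work is shown below. It trains a toy encoder-decoder with an L1 reconstruction term only; the thesis does not publish this architecture, the adversarial term of pix2pix-style training is omitted for brevity, and the MRI/PET batches are random stand-ins:

```python
# Toy paired image-translation training step (L1 term of a pix2pix-style
# objective only; the adversarial loss is omitted). All names and shapes
# are illustrative, not the thesis's actual model.
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Toy encoder-decoder mapping a 1-channel MRI slice to a PET-like slice."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

model = TinyTranslator()
opt = torch.optim.Adam(model.parameters(), lr=2e-4)
l1 = nn.L1Loss()

mri = torch.rand(8, 1, 128, 128)   # stand-in batch of MRI slices
pet = torch.rand(8, 1, 128, 128)   # stand-in paired PET slices

pred = model(mri)
loss = l1(pred, pet)               # pixel-wise reconstruction term
opt.zero_grad()
loss.backward()
opt.step()
print(f"L1 loss: {loss.item():.4f}")
```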

    Self-supervised learning for transferable representations

    Machine learning has undeniably achieved remarkable advances thanks to large labelled datasets and supervised learning. However, this progress is constrained by the labour-intensive annotation process: it is not feasible to generate extensive labelled datasets for every problem we aim to address. Consequently, there has been a notable shift in recent times toward approaches that solely leverage raw data. Among these, self-supervised learning has emerged as a particularly powerful approach, offering scalability to massive datasets and showcasing considerable potential for effective knowledge transfer. This thesis investigates self-supervised representation learning with a strong focus on computer vision applications. We provide a comprehensive survey of self-supervised methods across various modalities, introducing a taxonomy that categorises them into four distinct families while also highlighting practical considerations for real-world implementation. Our focus thereafter is on the computer vision modality, where we perform a comprehensive benchmark evaluation of state-of-the-art self-supervised models on many diverse downstream transfer tasks. Our findings reveal that self-supervised models often outperform supervised learning across a spectrum of tasks, albeit with correlations weakening as tasks move beyond classification, particularly for datasets with distribution shifts. Digging deeper, we investigate the influence of data augmentation on the transferability of contrastive learners, uncovering a trade-off between spatial and appearance-based invariances that generalises to real-world transformations. This begins to explain the differing empirical performance achieved by self-supervised learners on different downstream tasks, and it showcases the advantages of specialised representations produced with tailored augmentation. Finally, we introduce a novel self-supervised pre-training algorithm for object detection, aligning pre-training with the downstream architecture and objectives, leading to reduced localisation errors and improved label efficiency. In conclusion, this thesis contributes a comprehensive understanding of self-supervised representation learning and its role in enabling effective transfer across computer vision tasks.
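    As a concrete example of the contrastive family of self-supervised methods examined in this thesis, the sketch below implements the widely used NT-Xent (SimCLR-style) loss over two augmented views of a batch; the projections here are random placeholders, and this is not the thesis's own code:

```python
# NT-Xent contrastive loss, as used by SimCLR-style self-supervised learners.
# z1 and z2 are the projected embeddings of two augmented views of the same
# batch; each sample's positive is its other view, all else are negatives.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit norm
    sim = z @ z.t() / tau                                # cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-pairs
    # The positive for sample i is its other view, at index (i + n) mod 2n.
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)
    return F.cross_entropy(sim, targets)

# Usage with toy projections (in practice these come from an encoder + head).
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
print(nt_xent(z1, z2))
```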

    Exposure Fusion for Hand-held Camera Inputs with Optical Flow and PatchMatch

    This paper proposes a hybrid synthesis method for the fusion of multi-exposure images taken by hand-held cameras. Motion, whether due to a shaky camera or dynamic scene content, should be compensated before any content fusion, since any misalignment can easily cause blurring/ghosting artifacts in the fused result. Our hybrid method can deal with such motion and effectively maintain the exposure information of each input. In particular, the proposed method first applies optical flow for a coarse registration, which performs well under complex non-rigid motion but produces deformations in regions with missing correspondences; such correspondences are absent due to occlusions caused by scene parallax or moving content. To correct these registration errors, we segment the images into superpixels and identify problematic alignments per superpixel, which are then realigned with PatchMatch. The method thus combines the efficiency of optical flow with the accuracy of PatchMatch. After the PatchMatch correction, we obtain a fully aligned image stack that facilitates high-quality fusion free from blurring/ghosting artifacts. We compare our method with existing fusion algorithms on various challenging examples, including static/dynamic, indoor/outdoor and daytime/nighttime scenes. Experimental results demonstrate the effectiveness and robustness of our method.
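    A much-simplified stand-in for this pipeline is sketched below: dense optical flow provides the coarse registration, and Mertens exposure fusion merges the aligned stack. OpenCV exposes no PatchMatch alignment, so the superpixel/PatchMatch correction stage described in the paper is omitted here, and the input file names are placeholders:

```python
# Coarse flow-based registration + exposure fusion (sketch only; the paper's
# superpixel/PatchMatch correction stage is not included).
import cv2
import numpy as np

ref = cv2.imread("exposure_mid.jpg")   # reference exposure (placeholder file)
src = cv2.imread("exposure_low.jpg")   # exposure to align to ref (placeholder)

g_ref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
g_src = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)

# Coarse registration: dense Farneback optical flow from ref to src,
# then warp src into the reference frame with the flow field.
flow = cv2.calcOpticalFlowFarneback(g_ref, g_src, None,
                                    0.5, 3, 25, 3, 5, 1.2, 0)
h, w = g_ref.shape
grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
map_x = (grid_x + flow[..., 0]).astype(np.float32)
map_y = (grid_y + flow[..., 1]).astype(np.float32)
aligned = cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)

# Fuse the (now coarsely aligned) stack with Mertens exposure fusion.
fused = cv2.createMergeMertens().process([ref, aligned])
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```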

    Scalable Exploration of Complex Objects and Environments Beyond Plain Visual Replication

    Digital multimedia content and presentation means are rapidly growing in sophistication and are now capable of describing detailed representations of the physical world. 3D exploration experiences allow people to appreciate, understand and interact with intrinsically virtual objects. Communicating information on objects requires the ability to explore them from different angles, as well as to mix highly photorealistic or illustrative presentations of the objects themselves with additional data that provides further insight, typically represented in the form of annotations. Effectively providing these capabilities requires solving important problems in visualization and user interaction. In this thesis, I studied these problems in the cultural heritage computing domain, focusing on the very common and important special case of mostly planar, but visually, geometrically, and semantically rich objects. These can be generally roughly flat objects with a standard frontal viewing direction (e.g., paintings, inscriptions, bas-reliefs), as well as visualizations of fully 3D objects from a particular point of view (e.g., canonical views of buildings or statues). Selecting a precise application domain and a specific presentation mode allowed me to concentrate on the well-defined use case of the exploration of annotated relightable stratigraphic models (in particular, for local and remote museum presentation). My main results and contributions to the state of the art are a novel technique for interactively controlling visualization lenses while automatically maintaining good focus-and-context parameters, a novel approach for avoiding clutter in an annotated model and for guiding users towards interesting areas, and a method for structuring audio-visual object annotations into a graph and for using that graph to improve guidance and support storytelling and automated tours. We demonstrated the effectiveness and potential of our techniques through interactive exploration sessions on various screen sizes and types, ranging from desktop devices to large-screen displays for a walk-up-and-use museum installation. Keywords: Computer Graphics, Human-Computer Interaction, Interactive Lenses, Focus-and-Context, Annotated Models, Cultural Heritage Computing.
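    The annotation-graph idea mentioned above can be illustrated with a toy sketch: annotations are nodes, narrative "follows" links are edges, and an automated tour is simply a traversal from a chosen starting annotation (all node names below are invented for illustration and do not come from the thesis):

```python
# Toy annotation graph: nodes are annotations, edges are narrative links,
# and an automated tour is a breadth-first traversal from a start node.
from collections import deque

annotation_graph = {
    "overview":         ["pigment_layer", "inscription"],
    "pigment_layer":    ["restoration_area"],
    "inscription":      ["restoration_area"],
    "restoration_area": [],
}

def automated_tour(graph: dict, start: str) -> list:
    """Visit each reachable annotation once, nearest stories first."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

print(automated_tour(annotation_graph, "overview"))
# ['overview', 'pigment_layer', 'inscription', 'restoration_area']
```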

    2023-2024 Catalog

    The 2023-2024 Governors State University Undergraduate and Graduate Catalog is a comprehensive listing of current information regarding: Degree Requirements, Course Offerings, and Undergraduate and Graduate Rules and Regulations.

    Simultaneous Multiparametric and Multidimensional Cardiovascular Magnetic Resonance Imaging

    No abstract available

    Automatic normalized digital color staining in the recognition of abnormal blood cells using generative adversarial networks

    Background and Objectives: Combining the knowledge of clinical pathologists with deep learning models is a growing trend in the morphological analysis of cells circulating in blood, adding objectivity, accuracy and speed to the diagnosis of hematological and non-hematological diseases. However, variability in staining protocols across different laboratories can affect the color of images and the performance of automatic recognition models. The objective of this work is to develop, train and evaluate a new system for the normalization of color staining of peripheral blood cell images, so that images from different centers are transformed to match the color staining of a reference center (RC) while preserving structural morphological features. Methods: The system has two modules, GAN1 and GAN2. GAN1 uses the PIX2PIX technique to fade original color images to an adaptive gray, while GAN2 transforms them into RGB normalized images. Both GANs have a similar structure, where the generator is a U-NET convolutional neural network with ResNet and the discriminator is a classifier with a ResNet34 structure. Digitally stained images were evaluated using GAN metrics and histograms to assess the ability to modify color without altering cell morphology. The system was also evaluated as a pre-processing tool before cells undergo a classification process. For this purpose, a CNN classifier was designed for three classes: abnormal lymphocytes, blasts and reactive lymphocytes. Results: Training of all GANs and the classifier was performed using RC images, while evaluations were conducted using images from four other centers. Classification tests were performed before and after applying the stain normalization system. The overall accuracy reached a similar value of around 96% in both cases for the RC images, indicating the neutrality of the normalization model for the reference images. In contrast, there was a significant improvement in classification performance when applying stain normalization to the other centers. Reactive lymphocytes were the most sensitive to stain normalization, with true positive rates (TPR) increasing from 46.3%-66% for the original images to 81.2%-97.2% after digital staining. TPR for abnormal lymphocytes ranged from 31.9%-95.7% with original images to 83%-100% with digitally stained images. The blast class showed TPR ranges of 90.3%-94.4% and 94.4%-100% for original and stained images, respectively. Conclusions: The proposed GAN-based stain normalization approach improves the performance of classifiers on multicenter data sets by generating digitally stained images with a quality similar to the original images and adaptability to a reference staining standard. The system has a low computational cost and can help improve the performance of automatic recognition models in clinical settings. This work is part of a research project funded by the Ministry of Science and Innovation of Spain, with reference PID2019-104087RB-I00.
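    The two-stage design described above can be illustrated at inference time with the toy sketch below, which chains a color-to-gray generator (the role of GAN1) and a gray-to-RGB generator (the role of GAN2). The generator here is a deliberately simplified stand-in, not the paper's U-NET/ResNet architecture, and the input batch is random:

```python
# Inference-time sketch of the two-stage stain normalization idea:
# stage 1 fades a stained image to an adaptive gray, stage 2 re-stains it
# in the reference-center (RC) color space. Toy architecture, not the paper's.
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in generator: small conv encoder-decoder, in_ch -> out_ch."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

gan1 = ToyGenerator(3, 1)   # color -> adaptive gray (role of GAN1)
gan2 = ToyGenerator(1, 3)   # gray -> RC-normalized RGB (role of GAN2)

@torch.no_grad()
def normalize_stain(rgb_cells: torch.Tensor) -> torch.Tensor:
    """rgb_cells: (N, 3, H, W) in [-1, 1]. Returns RC-normalized images."""
    gray = gan1(rgb_cells)   # stage 1: fade color to adaptive gray
    return gan2(gray)        # stage 2: re-stain in the RC color space

batch = torch.rand(4, 3, 128, 128) * 2 - 1   # toy batch of cell crops
print(normalize_stain(batch).shape)          # torch.Size([4, 3, 128, 128])
```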