Flood dynamics derived from video remote sensing
Flooding is by far the most pervasive natural hazard, with the human impacts of floods expected to worsen in the coming decades due to climate change. Hydraulic models are a key tool for understanding flood dynamics and play a pivotal role in unravelling the processes that occur during a flood event, including inundation flow patterns and velocities. In the realm of river basin dynamics, video remote sensing is emerging as a transformative tool that can offer insights into flow dynamics and thus, together with other remotely sensed data, has the potential to be deployed to estimate discharge. Moreover, the integration of video remote sensing data with hydraulic models offers a pivotal opportunity to enhance the predictive capacity of these models.
Hydraulic models are traditionally built with accurate terrain, flow and bathymetric data and are often calibrated and validated using observed data to obtain meaningful and actionable model predictions. Data for accurately calibrating and validating hydraulic models are not always available, leaving the assessment of the predictive capabilities of some models deployed in flood risk management in question. Recent advances in remote sensing have heralded the availability of vast, high-resolution video datasets. The parallel evolution of computing capabilities, coupled with advancements in artificial intelligence, is enabling the processing of data at unprecedented scales and complexities, allowing us to glean meaningful insights from datasets that can be integrated with hydraulic models.

The aims of the research presented in this thesis were twofold. The first aim was to evaluate and explore the potential applications of video from air- and space-borne platforms to comprehensively calibrate and validate two-dimensional hydraulic models. The second aim was to estimate river discharge using satellite video combined with high-resolution topographic data. In the first of three empirical chapters, non-intrusive image velocimetry techniques were employed to estimate river surface velocities in a rural catchment. For the first time, a 2D hydraulic model was fully calibrated and validated using velocities derived from Unpiloted Aerial Vehicle (UAV) image velocimetry approaches. This highlighted the value of these data in mitigating the limitations associated with traditional data sources used in parameterizing two-dimensional hydraulic models. This finding inspired the subsequent chapter, where river surface velocities, derived using Large Scale Particle Image Velocimetry (LSPIV), and flood extents, derived using deep neural network-based segmentation, were extracted from satellite video and used to rigorously assess the skill of a two-dimensional hydraulic model. Harnessing the ability of deep neural networks to learn complex features and deliver accurate and contextually informed flood segmentation, the potential value of satellite video for validating two-dimensional hydraulic model simulations is demonstrated. In the final empirical chapter, the convergence of satellite video imagery and high-resolution topographical data bridges the gap between visual observations and quantitative measurements by enabling the direct extraction of velocities from video imagery, which is used to estimate river discharge.

Overall, this thesis demonstrates the significant potential of emerging video-based remote sensing datasets and offers approaches for integrating these data into hydraulic modelling and discharge estimation practice. The incorporation of LSPIV techniques into flood modelling workflows signifies a methodological progression, especially in areas lacking robust data collection infrastructure. Satellite video remote sensing heralds a major step forward in our ability to observe river dynamics in real time, with potentially significant implications in the domain of flood modelling science.
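To make the discharge-estimation step concrete, the sketch below applies the standard velocity-area approach: depth-averaged velocities are approximated from image-derived surface velocities and integrated over a surveyed cross-section. This is a minimal illustration rather than the method developed in the thesis; the 0.85 surface-to-mean velocity coefficient, the function name, and the example cross-section are assumptions.

    # Minimal velocity-area sketch: surface velocities (e.g. from LSPIV) plus a
    # surveyed cross-section give an approximate discharge. The 0.85 coefficient
    # and the example numbers are illustrative assumptions, not thesis values.
    import numpy as np

    def discharge_from_surface_velocities(station_x, bed_z, water_level,
                                          surface_velocity, alpha=0.85):
        """Estimate discharge Q (m^3/s) across one river cross-section.

        station_x        -- horizontal positions across the channel (m)
        bed_z            -- bed elevation at each station (m)
        water_level      -- water-surface elevation (m)
        surface_velocity -- surface velocity at each station (m/s)
        alpha            -- surface-to-depth-averaged velocity coefficient
        """
        depth = np.clip(water_level - np.asarray(bed_z), 0.0, None)   # wetted depth
        mean_velocity = alpha * np.asarray(surface_velocity)          # depth-averaged estimate
        unit_q = mean_velocity * depth                                 # discharge per unit width
        dx = np.diff(np.asarray(station_x))
        # Trapezoidal integration of unit discharge across the channel width.
        return float(np.sum(0.5 * (unit_q[1:] + unit_q[:-1]) * dx))

    # Hypothetical 40 m wide channel with a parabolic bed and faster flow mid-channel.
    x = np.linspace(0.0, 40.0, 21)
    bed = 0.002 * (x - 20.0) ** 2
    v_surf = 1.2 * np.exp(-((x - 20.0) / 12.0) ** 2)
    print(f"Q ~ {discharge_from_surface_velocities(x, bed, 1.0, v_surf):.1f} m^3/s")

In practice, the surface velocities would come from image velocimetry applied to UAV or satellite video, and the bed geometry and water level from the high-resolution topographic data described above.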
Using Image Translation To Synthesize Amyloid Beta From Structural MRI
Amyloid-beta and brain atrophy are known hallmarks of Alzheimer’s Disease (AD) and can be quantified with positron emission tomography (PET) and structural magnetic resonance imaging (MRI), respectively. PET uses radiotracers that bind to amyloid-beta, whereas MRI can measure brain morphology. PET scans have limitations, including cost, invasiveness (they involve injections and exposure to ionizing radiation), and limited accessibility, making PET impractical for screening early-onset AD. Conversely, MRI is cheaper, less invasive (free from ionizing radiation), and more widely available; however, it cannot provide the necessary molecular information. There is a known relationship between amyloid-beta and brain atrophy. This thesis aims to synthesize amyloid-beta PET images from structural MRI using image translation, an advanced form of machine learning. The developed models achieve high similarity metrics between the real and synthetic PET images and a high degree of accuracy in radiotracer quantification. The results are highly impactful as they enable amyloid-beta measurements from every MRI, for free.
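As a rough illustration of what paired image translation involves, the sketch below trains a small encoder-decoder to map a structural MRI slice to an amyloid PET slice using an L1 reconstruction loss, in the spirit of pix2pix-style translation. The architecture, tensor shapes, and hyperparameters are illustrative assumptions, not the models developed in the thesis.

    # Illustrative paired image-translation sketch (MRI slice -> amyloid PET slice).
    # Generic pix2pix-style setup; sizes and hyperparameters are assumptions.
    import torch
    import torch.nn as nn

    class TinyTranslator(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            )

        def forward(self, mri):                 # mri: (B, 1, H, W), intensity-normalised
            return self.decoder(self.encoder(mri))

    model = TinyTranslator()
    optimiser = torch.optim.Adam(model.parameters(), lr=2e-4)
    l1 = nn.L1Loss()

    # One toy training step on random tensors standing in for co-registered MRI/PET slices.
    mri_batch = torch.randn(4, 1, 128, 128)
    pet_batch = torch.randn(4, 1, 128, 128)
    loss = l1(model(mri_batch), pet_batch)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    print(f"L1 loss: {loss.item():.3f}")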
Multimodal MRI analysis using deep learning methods
Magnetic resonance imaging (MRI) has been widely used in scientific and clinical research. It is a non-invasive medical imaging technique that reveals anatomical structures and provides useful information for investigators to explore aging and pathological processes. Different MR modalities offer different useful properties. Automatic MRI analysis algorithms have been developed to address problems in many applications such as classification, segmentation, and disease diagnosis. Segmentation and labeling algorithms applied to brain MRIs enable evaluation of the volumetric changes of specific structures in neurodegenerative diseases. Reconstruction of fiber orientations using diffusion MRI is beneficial for obtaining a better understanding of the underlying structures.
In this thesis, we focused on the development of deep learning methods for MRI analysis using different image modalities. Specifically, we applied deep learning techniques to different applications, including segmentation of brain structures and reconstruction of tongue muscle fiber orientations. For segmentation of brain structures, we developed an end-to-end deep learning algorithm for ventricle parcellation of brains with ventriculomegaly using T1-w MR images. The deep network provides robust and accurate segmentation results in subjects with high variability in ventricle shapes and sizes. We developed another deep learning method to automatically parcellate the thalamus into a set of thalamic nuclei using T1-w MRI and features from diffusion MRI. The algorithm incorporates a harmonization step to make the network adapt to input images with different contrasts.
We also studied the strains associated with tongue muscles during speech production using multiple MRI modalities. To enable this study, we first developed a deep network to reconstruct crossing tongue muscle fiber orientations using diffusion MRI. The network was specifically designed for the human tongue and accounted for the orthogonality property of the tongue muscles. Next, we proposed a comprehensive pipeline to analyze the strains associated with tongue muscle fiber orientations during speech using diffusion MRI together with tagged and cine MRI. The proposed pipeline provides a means to analyze the cooperation between muscle groups during speech production.
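As an illustration of how the orthogonality property of the tongue muscles might be encoded during training, the sketch below defines a loss over two predicted crossing-fiber directions per voxel: a sign-invariant angular data term plus a penalty on the cosine between the two predicted directions. The weighting and the data term are assumed, simplified stand-ins rather than the thesis formulation.

    # Sketch of a loss encouraging two predicted crossing-fiber directions per voxel
    # to be mutually orthogonal. Illustrative assumption, not the thesis formulation.
    import torch
    import torch.nn.functional as F

    def crossing_fiber_loss(pred1, pred2, gt1, gt2, ortho_weight=0.1):
        """pred*/gt*: (N, 3) fiber direction vectors per voxel (sign-ambiguous)."""
        pred1 = F.normalize(pred1, dim=1)
        pred2 = F.normalize(pred2, dim=1)
        # Angular data term: 1 - |cos(angle)| is invariant to the sign of the direction.
        data = (1 - (pred1 * F.normalize(gt1, dim=1)).sum(1).abs()).mean() \
             + (1 - (pred2 * F.normalize(gt2, dim=1)).sum(1).abs()).mean()
        # Orthogonality penalty: squared cosine between the two predicted directions.
        ortho = ((pred1 * pred2).sum(1) ** 2).mean()
        return data + ortho_weight * ortho

    # Toy usage with random vectors standing in for network outputs and references.
    p1, p2 = torch.randn(8, 3, requires_grad=True), torch.randn(8, 3, requires_grad=True)
    g1, g2 = torch.randn(8, 3), torch.randn(8, 3)
    crossing_fiber_loss(p1, p2, g1, g2).backward()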
Medical Image Analysis using Deep Relational Learning
In the past ten years, with the help of deep learning, especially the rapid
development of deep neural networks, medical image analysis has made remarkable
progress. However, how to effectively exploit the relational information between
various tissues or organs in medical images remains a challenging and
under-studied problem. In this thesis, we propose two
novel solutions to this problem based on deep relational learning. First, we
propose a context-aware fully convolutional network that effectively models
implicit relation information between features to perform medical image
segmentation. The network achieves state-of-the-art segmentation results on
the Multimodal Brain Tumor Segmentation 2017 (BraTS2017) and Multimodal Brain
Tumor Segmentation 2018 (BraTS2018) datasets. Subsequently, we propose a new
hierarchical homography estimation network to achieve accurate medical image
mosaicing by learning the explicit spatial relationship between adjacent
frames. We use the UCL Fetoscopy Placenta dataset to conduct experiments and
our hierarchical homography estimation network outperforms other
state-of-the-art mosaicing methods while generating robust and meaningful
mosaicing results on unseen frames.
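The abstract does not detail how the context-aware network models implicit relations between features; one common realization of this idea is a non-local (self-attention) block that relates every spatial position in a feature map to every other. The sketch below is therefore an illustrative assumption rather than the proposed module.

    # Minimal non-local (self-attention) block over convolutional feature maps, shown
    # as one common way to model implicit pairwise relations between spatial positions.
    # Illustrative assumption; not the context-aware network proposed in the thesis.
    import torch
    import torch.nn as nn

    class NonLocalBlock(nn.Module):
        def __init__(self, channels, reduced=None):
            super().__init__()
            reduced = reduced or channels // 2
            self.query = nn.Conv2d(channels, reduced, 1)
            self.key = nn.Conv2d(channels, reduced, 1)
            self.value = nn.Conv2d(channels, reduced, 1)
            self.out = nn.Conv2d(reduced, channels, 1)

        def forward(self, x):                              # x: (B, C, H, W)
            b, c, h, w = x.shape
            q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C')
            k = self.key(x).flatten(2)                      # (B, C', HW)
            v = self.value(x).flatten(2).transpose(1, 2)    # (B, HW, C')
            attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)   # (B, HW, HW)
            ctx = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)        # (B, C', H, W)
            return x + self.out(ctx)                        # residual keeps training stable

    # Toy usage on a feature map from an encoder.
    features = torch.randn(2, 64, 32, 32)
    print(NonLocalBlock(64)(features).shape)                # torch.Size([2, 64, 32, 32])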
Automated Distinct Bone Segmentation from Computed Tomography Images using Deep Learning
Large-scale CT scans are frequently performed for forensic and diagnostic purposes, to plan and
direct surgical procedures, and to track the development of bone-related diseases. This often
involves radiologists who have to annotate bones manually or semi-automatically, which is
a time-consuming task. Their annotation workload can be reduced by automated segmentation
and detection of individual bones. This automation of distinct bone segmentation not only has
the potential to accelerate current workflows but also opens up new possibilities for processing
and presenting medical data for planning, navigation, and education.
In this thesis, we explored the use of deep learning for automating the segmentation of all
individual bones within an upper-body CT scan. To do so, we had to find a network
architecture that provides a good trade-off between the problem’s high computational
demands and the results’ accuracy. After finding a baseline method and having enlarged
the dataset, we set out to eliminate the most prevalent types of error. To do so, we
introduced a novel method called binary-prediction-enhanced multi-class (BEM) inference,
which separates the task into two: distinguishing bone from non-bone is conducted
separately from identifying the individual bones. Both predictions are then merged,
which leads to superior results. Another type of error is tackled by our developed
architecture, the Sneaky-Net, which receives additional inputs with larger fields of
view but at a smaller resolution. We can thus sneak more extensive areas of the input
into the network while keeping the growth of additional pixels in check.
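A minimal sketch of the BEM idea described above, under the assumption that merging amounts to masking: the binary network decides where bone is, and the multi-class network decides which bone each foreground voxel belongs to. The exact merging rule used in the thesis may differ.

    # Sketch of binary-prediction-enhanced multi-class (BEM) merging: the binary
    # output decides *where* bone is, the multi-class output decides *which* bone.
    # The masking scheme here is an illustrative assumption.
    import numpy as np

    def bem_merge(binary_prob, multiclass_prob, threshold=0.5):
        """binary_prob: (H, W, D) P(bone); multiclass_prob: (C, H, W, D), class 0 = background."""
        bone_mask = binary_prob > threshold
        # Among foreground classes only, pick the most likely bone label per voxel.
        bone_label = 1 + np.argmax(multiclass_prob[1:], axis=0)
        # Voxels the binary network calls background stay background (label 0).
        return np.where(bone_mask, bone_label, 0)

    # Toy volumes standing in for network softmax outputs (4 bone classes + background).
    rng = np.random.default_rng(0)
    binary = rng.random((8, 8, 8))
    multi = rng.random((5, 8, 8, 8))
    multi /= multi.sum(axis=0, keepdims=True)
    print(np.unique(bem_merge(binary, multi)))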
Overall, we present a deep-learning-based method that reliably segments most of the over
one hundred distinct bones present in upper-body CT scans in an end-to-end trained manner,
quickly enough to be used in interactive software. Our algorithm has been included in our
group’s virtual reality medical image visualisation software SpectoVR, with the plan to be
used as one of the puzzle pieces in surgical planning and navigation, as well as in the
education of future doctors.