Beam scanning by liquid-crystal biasing in a modified SIW structure
A fixed-frequency beam-scanning 1D antenna based on Liquid Crystals (LCs) is designed for application in 2D scanning with lateral alignment. The 2D array environment imposes full decoupling of adjacent 1D antennas, which often conflicts with the LC requirement of DC biasing; the proposed design accommodates both. The LC medium is placed inside a Substrate Integrated Waveguide (SIW) modified to work as a Groove Gap Waveguide, with radiating slots etched on the upper broad wall so that it radiates as a Leaky-Wave Antenna (LWA). This allows effective application of the DC bias voltage needed for tuning the LCs. At the same time, the RF field remains laterally confined, making it possible to lay several antennas in parallel and achieve 2D beam scanning. The design is validated by simulation employing the actual properties of a commercial LC medium.
Novel 129Xe Magnetic Resonance Imaging and Spectroscopy Measurements of Pulmonary Gas-Exchange
Gas-exchange is the primary function of the lungs and involves removing carbon dioxide from the body and exchanging it within the alveoli for inhaled oxygen. Several different pulmonary, cardiac and cardiovascular abnormalities have negative effects on pulmonary gas-exchange. Unfortunately, clinical tests do not always pinpoint the problem; sensitive and specific measurements are needed to probe the individual components participating in gas-exchange for a better understanding of pathophysiology, disease progression and response to therapy.
In vivo Xenon-129 gas-exchange magnetic resonance imaging (129Xe gas-exchange MRI) has the potential to overcome these challenges. When participants inhale hyperpolarized 129Xe gas, it exhibits different MR spectral properties as a gas, as it diffuses through the alveolar membrane, and as it binds to red blood cells. 129Xe MR spectroscopy and imaging provide a way to tease out the different anatomic components of gas-exchange simultaneously and supply spatial information about where abnormalities may occur.
In this thesis, I developed and applied 129Xe MR spectroscopy and imaging to measure gas-exchange in the lungs alongside other clinical and imaging measurements. I measured 129Xe gas-exchange in asymptomatic congenital heart disease and in prospective, controlled studies of long-COVID. I also developed mathematical tools to model 129Xe MR signals during acquisition and reconstruction. The insights gained from my work underscore the potential of 129Xe gas-exchange MRI biomarkers for a better understanding of cardiopulmonary disease. My work also provides a way to generate a deeper imaging and physiologic understanding of gas-exchange in vivo in healthy participants and in patients with chronic lung and heart disease.
Optimising water quality outcomes for complex water resource systems and water grids
As the world progresses, water resources are likely to be subjected to much greater pressures than in the past. Even though the principal water problem revolves around inadequate and uncertain water supplies, water quality management plays an equally important role. Availability of good quality water is paramount to the sustainability of the human population as well as the environment. Water quality and quantity objectives can conflict, and reconciling them becomes more complicated with challenges such as climate change, growing populations and changed land uses. Maintaining adequate water quality in a reservoir is further complicated by multiple inflows of differing quality, often resulting in poor stored water quality. Hence, it is fundamental to approach this issue in a more systematic, comprehensive, and coordinated fashion. Most previous studies of water resources management focused on water quantity and considered water quality separately. This research study instead considered water quantity and quality objectives simultaneously in a single model, to explore and understand the relationship between them in a reservoir system. A case study area with both water quantity and quality challenges was identified in Western Victoria, Australia: Taylors Lake in the Grampians system receives water from multiple sources of differing quality and quantity. A combined simulation and optimisation approach was adopted to carry out the analysis, and a multi-objective optimisation approach was applied to achieve optimal water availability and quality in the storage. The optimisation model included three objective functions: water volume and two water quality parameters, salinity and turbidity. Results showed the competing nature of the water quantity and quality objectives and established the trade-offs between them.
It further showed that a range of optimal solutions can be generated to manage those trade-offs effectively. The trade-off analysis showed that selective harvesting of inflows is effective at improving water quality in storage; however, under strict water quality restrictions there is a considerable loss in water volume. The robustness of the optimisation approach was confirmed through sensitivity and uncertainty analysis. The research also incorporated various spatio-temporal scenario analyses to systematically articulate long-term and short-term operational planning strategies, establishing operational decisions around possible harvesting regimes that achieve optimal water quantity and quality while meeting all water demands. The climate change analysis revealed that optimal management of water quantity and quality in storage becomes extremely challenging under future climate projections: the large reduction in storage volume will lead to water supply shortfalls and, owing to reduced inflow quality, an inability to undertake selective harvesting. In that context, selective harvesting of inflows based on water quality will no longer be an option for managing water quantity and quality optimally in storage. Significant conclusions of this research include the establishment of trade-offs between water quality and quantity objectives particular to this configuration of water supply system. The work demonstrated that selective harvesting of inflows improves stored water quality; this finding, along with the approach used, is a significant contribution for decision makers in the water sector. The simulation-optimisation approach is effective in providing a range of optimal solutions, which can be used to make more informed decisions around achieving optimal water quality and quantity in storage.
It was further demonstrated that there is a range of useful planning periods, both long-term (>10 years) and short-term (<1 year), each offering distinct advantages and insights, which is an additional key contribution of the work. Importantly, climate change was also considered: diminishing water resources, particularly in this geographic location, make it increasingly difficult to optimise both quality and quantity in storage.
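As a minimal illustration of the trade-off analysis, the sketch below applies a Pareto-dominance filter to hypothetical candidate operating policies scored on the three objectives (stored volume to be maximised; salinity and turbidity to be minimised). The data values and function names are illustrative assumptions, not from the study itself.

```python
def pareto_front(solutions):
    """Return the non-dominated subset of candidate solutions.

    Each solution is (volume, salinity, turbidity); volume is
    maximised, salinity and turbidity are minimised.
    """
    def dominates(a, b):
        # a dominates b if it is no worse in every objective
        # and strictly better in at least one
        no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
        strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
        return no_worse and strictly_better

    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions)]

# hypothetical policies: (volume in GL, salinity, turbidity)
candidates = [
    (95, 0.8, 12.0),   # high volume, moderate quality
    (80, 0.5, 9.0),    # less water, better quality
    (70, 0.9, 13.0),   # dominated: less water AND worse quality
    (60, 0.3, 7.0),    # little water, best quality
]
front = pareto_front(candidates)
```

Each point remaining on the front represents one of the "optimal solutions" among which a decision maker trades stored volume against quality.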
Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse
This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective to mitigate such weaknesses.
This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features that emerge through socialisation are resistant to change and manipulation, and thus might serve as endogenous measures of sociopolitical contexts, and thus of groups.
In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service, whereby patterns in users' speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018; 6 November was the date of the midterm (i.e. non-Presidential) elections in the United States. The corpus is assembled from the original posts of 5,889 users, who are nominally geolocated to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters according to the municipalities they represent shows regular sociodemographic differentials across clusters. This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
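In spirit, the clustering step could look like the stand-in sketch below: users represented by relative frequencies of a few function words (one kind of unconscious lexicogrammatical feature), grouped with a plain Lloyd-style k-means. The word list, feature representation and algorithm are assumptions for illustration only; the thesis's actual corpus-based method is more involved.

```python
import random
from collections import Counter

# a tiny illustrative set of function words (assumed, not the thesis's list)
FUNCTION_WORDS = ["the", "of", "and", "to", "that", "i", "you", "we"]

def profile(text):
    """Relative frequency of each function word in a user's posts."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd-style k-means over feature vectors (stdlib only)."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centre (squared distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centres[c])))
            clusters[j].append(p)
        # move each centre to the mean of its cluster (keep it if empty)
        centres = [[sum(col) / len(cl) for col in zip(*cl)] if cl
                   else centres[j]
                   for j, cl in enumerate(clusters)]
    return centres, clusters
```

A usage along the lines of `kmeans([profile(t) for t in user_texts], k)` would then yield the linguistic clusters compared against municipalities.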
Designing and Expanding Electrical Networks – Complexity and Combinatorial Algorithms
The transition from conventional to renewable power generation has a large impact on when and where electricity is generated. To deal with this change, the electric transmission network needs to be adapted and expanded.
Expanding the network has two benefits. First, electricity can be generated at locations with high renewable energy potential and then transmitted to the consumers via the transmission network. Without the expansion, the existing transmission network may be unable to cope with the transmission needs, thus requiring power generation at locations closer to the energy demand but less well suited to generation. Second, renewable energy generation (e.g., from wind or solar irradiation) is typically volatile. Strong interconnections between regions within a large geographical area allow generation and demand to be smoothed over that area. This smoothing makes them more predictable and the volatility of the generation easier to handle.
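The smoothing claim can be checked numerically: summing the output of n independent volatile generators reduces the relative volatility of the total roughly as 1/sqrt(n). The Gaussian output model below is a toy assumption used only for illustration.

```python
import random
import statistics

rng = random.Random(42)

def relative_volatility(n_sites, n_hours=5000):
    """Relative std dev of total output from n independent sites,
    each modelled as Gaussian with mean 1.0 and std 0.3 (toy model)."""
    totals = [sum(rng.gauss(1.0, 0.3) for _ in range(n_sites))
              for _ in range(n_hours)]
    return statistics.stdev(totals) / statistics.mean(totals)

v1 = relative_volatility(1)    # a single site, roughly 0.3
v16 = relative_volatility(16)  # 16 interconnected sites
# aggregating 16 independent sites cuts relative volatility roughly fourfold
```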
In this thesis we consider problems that arise when designing and expanding electric transmission networks. As the first step we formalize them such that we have a precise mathematical problem formulation. Afterwards, we pursue two goals: first, improve the theoretical understanding of these problems by determining their computational complexity under various restrictions, and second, develop algorithms that can solve these problems.
A basic formulation of the expansion planning problem models the network as a graph and potential new transmission lines as edges that may be added to the graph. We formalize this formulation as the problems Flow Expansion and Electrical Flow Expansion, which differ in the flow model (graph-theoretical vs. electrical flow). We prove that in general the decision variants of these problems are NP-complete, even if the network structure is already very simple, e.g., a star. For certain restrictions, we give polynomial-time algorithms as well. Our results delineate the boundary between the NP-complete cases and the cases that can be solved in polynomial time.
The basic expansion planning problems mentioned above ignore that real transmission networks should still be able to operate if a small part of the transmission equipment fails. We employ a criticality measure from the literature, which measures the dynamic effects of the failure of a single transmission line on the whole transmission network. In a first step, we compare this criticality measure to the widely used N-1 criterion.
Moreover, we formulate this criticality measure as a set of linear inequalities, which may be added to any formulation of a network design problem as a mathematical program. To exemplify this usage, we introduce the criticality criterion in two transmission network expansion planning problems, which can be formulated as mixed-integer linear programs (MILPs). We then evaluate the performance of solving the MILPs. Finally, we develop a greedy heuristic for one of the two problems, and compare its performance to solving the MILP.
Microgrids play an important role in the electrification of rural areas. We formalize the design of the cable layout of a microgrid as a geometric optimization problem, which we call Microgrid Cable Layout. A key difference to the network design problems above is that there is no graph with candidate edges given. Instead, edges and new vertices may be placed anywhere in the plane. We present a hybrid genetic algorithm for Microgrid Cable Layout and evaluate it on a set of benchmark instances, which include a real microgrid in the Democratic Republic of the Congo.
Finally, instead of expanding electrical networks, one may place electric equipment such as FACTS (flexible AC transmission systems), which influence the properties of the transmission lines so that the network can be used more efficiently. We apply a model of FACTS from the literature and study whether a given network, with given positions and properties of the FACTS, admits an electrical flow provided the FACTS are set appropriately.
We call such a flow a FACTS flow. In this thesis we prove that in general it is NP-complete to determine whether a network admits a FACTS flow, and we present polynomial-time algorithms for two restricted cases.
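For the graph-theoretical flow model, feasibility of a candidate expansion can be checked with a standard max-flow computation: the expanded network is feasible if the maximum flow from generation to demand meets the demand. The sketch below uses Edmonds-Karp on a toy instance; the edge data and demand values are illustrative, not from the thesis.

```python
from collections import deque

def max_flow(n, capacity, s, t):
    """Edmonds-Karp max flow; `capacity` maps (u, v) -> line capacity."""
    cap = dict(capacity)
    adj = {u: set() for u in range(n)}
    for (u, v) in capacity:
        adj[u].add(v)
        adj[v].add(u)
        cap.setdefault((v, u), 0)  # residual reverse edge
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # collect the path, find the bottleneck, and augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[e] for e in path)
        for (u, v) in path:
            cap[(u, v)] -= aug
            cap[(v, u)] += aug
        flow += aug

# toy instance: generator at node 0, demand of 5 at node 3; the existing
# lines carry only 2, but upgrading line (1, 3) to capacity 5 makes the
# expansion feasible
existing = {(0, 1): 5, (1, 3): 2}
expanded = {(0, 1): 5, (1, 3): 5}
```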
On noise, uncertainty and inference for computational diffusion MRI
Diffusion Magnetic Resonance Imaging (dMRI) has revolutionised the way brain microstructure and connectivity can be studied. Despite its unique potential in mapping the whole brain, biophysical properties are inferred from measurements rather than being directly observed. This indirect mapping from noisy data creates challenges and introduces uncertainty in the estimated properties. Hence, dMRI frameworks capable of dealing with noise and quantifying uncertainty are of great importance and are the topic of this thesis.
First, we look into approaches for reducing uncertainty by de-noising the dMRI signal. Thermal noise can be detrimental for modalities in which the information resides in the signal attenuation, such as dMRI, which has inherently low-SNR data. We highlight the dual effect of noise: it increases variance but also introduces bias. We then design a framework for evaluating denoising approaches in a principled manner. By setting objective criteria based on what a well-behaved denoising algorithm should offer, we provide a bespoke dataset and a set of evaluations. We demonstrate that common magnitude-based denoising approaches usually reduce noise-related variance in the signal, but do not address the bias introduced by the noise floor. Our framework also allows us to better characterise scenarios where denoising can be beneficial (e.g. when performed in the complex domain) and can open new opportunities, such as pushing spatio-temporal resolution boundaries.
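The noise-floor bias in magnitude data can be reproduced with a small generic simulation: the magnitude of a complex signal with Gaussian noise per channel is nearly unbiased at high SNR, while near zero signal its mean sits well above the true value. This is a textbook illustration of the Rician noise floor, not the thesis's evaluation framework.

```python
import math
import random

rng = random.Random(1)

def magnitude_mean(signal, sigma, n=20000):
    """Mean magnitude of a complex signal with Gaussian noise
    of std `sigma` added independently to each channel."""
    total = 0.0
    for _ in range(n):
        re = signal + rng.gauss(0.0, sigma)
        im = rng.gauss(0.0, sigma)
        total += math.hypot(re, im)
    return total / n

# at high SNR the magnitude is almost unbiased ...
high_snr_bias = magnitude_mean(10.0, 1.0) - 10.0
# ... but with zero true signal the noise floor dominates:
# the mean magnitude approaches sigma * sqrt(pi / 2) ~ 1.25, not 0
floor_bias = magnitude_mean(0.0, 1.0)
```

This is why magnitude-based denoising can reduce variance while leaving the bias untouched: averaging magnitude data averages the floor in, rather than out.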
Subsequently, we look into approaches for mapping uncertainty and design two inference frameworks for dMRI models, one using classical Bayesian methods and another using more recent data-driven algorithms. In the first approach, we build upon univariate random-walk Metropolis-Hastings MCMC, an extensively used method for sampling from the posterior distribution of model parameters given the data. We devise an efficient adaptive multivariate MCMC scheme, relying on the assumption that groups of model parameters can be jointly estimated if a proper covariance matrix is defined. In doing so, our algorithm increases sampling efficiency while preserving the accuracy and precision of the estimates. We show results using both synthetic and in-vivo dMRI data.
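A minimal sketch of the idea, with a toy two-parameter Gaussian posterior standing in for a real dMRI model: run a pilot random-walk Metropolis chain, estimate the empirical covariance of its samples, and reuse it (scaled by the common 2.38²/d rule of thumb) as the multivariate proposal covariance. All model details here are assumptions for illustration.

```python
import math
import random

rng = random.Random(0)

def log_post(theta):
    """Toy stand-in posterior: x ~ N(0, 1), y ~ N(0, 4).
    A real dMRI model would evaluate the signal likelihood here."""
    x, y = theta
    return -0.5 * (x ** 2 + (y ** 2) / 4.0)

def chol2(c):
    """Cholesky factor of a 2x2 covariance [[a, b], [b, d]]."""
    a, b, d = c[0][0], c[0][1], c[1][1]
    l11 = math.sqrt(a)
    l21 = b / l11
    l22 = math.sqrt(max(d - l21 ** 2, 1e-12))
    return l11, l21, l22

def mh(n, cov, start=(0.0, 0.0)):
    """Random-walk Metropolis with a multivariate Gaussian proposal."""
    l11, l21, l22 = chol2(cov)
    theta, lp = list(start), log_post(start)
    samples = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        prop = [theta[0] + l11 * z1, theta[1] + l21 * z1 + l22 * z2]
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        samples.append(tuple(theta))
    return samples

def emp_cov(samples):
    """Empirical 2x2 covariance of the chain, used to adapt the proposal."""
    xs, ys = [s[0] for s in samples], [s[1] for s in samples]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cxx = sum((x - mx) ** 2 for x in xs) / len(xs)
    cyy = sum((y - my) ** 2 for y in ys) / len(ys)
    cxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return [[cxx, cxy], [cxy, cyy]]

# pilot run with a naive isotropic proposal, then adapt:
# scale the pilot's empirical covariance by 2.38^2 / d (d = 2 here)
pilot = mh(3000, [[1.0, 0.0], [0.0, 1.0]])
c = emp_cov(pilot[500:])          # discard burn-in
s = 2.38 ** 2 / 2
tuned = mh(3000, [[s * c[0][0], s * c[0][1]],
                  [s * c[0][1], s * c[1][1]]])
```

The tuned proposal steps further along the wide y direction than the narrow x direction, which is the essence of joint multivariate proposals outperforming univariate updates.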
In the second approach, we resort to Simulation-Based Inference (SBI), a data-driven approach that avoids the need for iterative model inversions. This is achieved by using neural density estimators to learn the inverse mapping from the forward generative process (simulations) to the parameters of interest that generated those simulations. Addressing the problem via learning approaches offers the opportunity to achieve inference amortisation, boosting efficiency by avoiding the need to repeat the inference process for each new unseen dataset. It also allows inversion of forward processes (i.e. a series of processing steps) rather than only models. We explore different neural network architectures for conditional density estimation of the posterior distribution of parameters. Results and comparisons against MCMC suggest speed-ups of 2-3 orders of magnitude in the inference process while preserving the accuracy of the estimates.
Development of Novel Nano Platforms and Machine Learning Approaches for Raman Spectroscopy
In Raman spectroscopy, data analysis occupies a large amount of time and effort; it is therefore paramount to have the proper tools to extract the most meaning from the Raman analysis. This thesis explores improved ways to analyse Raman data, mostly using machine learning techniques available in Python. The substrate used throughout this thesis was patterned through an electrohydrodynamic process that forms micrometric pillars on the substrate, which, after gold coating, can generate surface-enhanced Raman scattering. An initial theoretical background was laid out for the electrohydrodynamic process, along with additional observations regarding its fluid mechanics. Furthermore, once the structures are fabricated and Raman measurements are taken, we show that it is possible to create an effective convolutional neural network that systematically evaluates the patterns' surface morphology and extracts the features responsible for the surface-enhanced Raman scattering phenomenon, predicting correctly 90% of the time from optical microscope images and 99% of the time from atomic force microscopy images. Additionally, a thorough machine learning analysis of the Raman literature was carried out. The best machine learning algorithms were put together into a script, combined with a graphical user interface, that can run multiple analyses such as principal component analysis and self-organizing maps in a centralised way. In this manner, we managed to consistently extract information from Raman and surface-enhanced Raman scattering spectra, opening possibilities for precise peak analysis methods using a multi-Lorentzian fit algorithm.
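The model underlying such a multi-Lorentzian fit can be written down directly: each Raman band is a Lorentzian line, and a spectrum is modelled as their sum. The sketch below defines that model; the band positions and widths are hypothetical values chosen for illustration.

```python
def lorentzian(x, amp, x0, gamma):
    """Single Lorentzian line: peak height `amp` at centre `x0`,
    half-width at half-maximum `gamma`."""
    return amp * gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

def multi_lorentzian(x, peaks):
    """Sum of Lorentzian components; `peaks` is a list of
    (amp, x0, gamma) triples, one per band."""
    return sum(lorentzian(x, *p) for p in peaks)

# two hypothetical, partially overlapping Raman bands (positions in cm^-1)
peaks = [(1.0, 1350.0, 15.0), (0.6, 1580.0, 10.0)]
value_at_1350 = multi_lorentzian(1350.0, peaks)
```

In a fitting routine, the (amp, x0, gamma) triples would be the free parameters adjusted by a least-squares optimiser against the measured spectrum.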
Graphon Estimation in bipartite graphs with observable edge labels and unobservable node labels
Many real-world data sets can be presented in the form of a matrix whose entries correspond to the interaction between two entities of different natures (the number of times a web user visits a web page, a student's grade in a subject, a patient's rating of a doctor, etc.). We assume in this paper that the mentioned interaction is determined by unobservable latent variables describing each entity. Our objective is to estimate the conditional expectation of the data matrix given the unobservable variables. This is presented as a problem of estimation of a bivariate function referred to as the graphon. We study the cases of piecewise-constant and Hölder-continuous graphons. We establish finite-sample risk bounds for the least-squares estimator and the exponentially weighted aggregate. These bounds highlight the dependence of the estimation error on the size of the data set, the maximum intensity of the interactions, and the level of noise. As the analyzed least-squares estimator is intractable, we propose an adaptation of Lloyd's alternating minimization algorithm to compute an approximation of the least-squares estimator. Finally, we present numerical experiments to illustrate the empirical performance of the graphon estimator on synthetic data sets.
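A stand-in sketch of the Lloyd-style alternating minimisation, assuming a fully observed matrix and known numbers of row and column clusters: alternately reassign row labels and column labels so as to minimise the squared error against the current block means. This is an illustrative simplification, not the paper's algorithm in full.

```python
def block_means(M, rl, cl, k, l):
    """Mean of M over each (row-cluster, column-cluster) block."""
    sums = [[0.0] * l for _ in range(k)]
    cnts = [[0] * l for _ in range(k)]
    for i, row in enumerate(M):
        for j, v in enumerate(row):
            sums[rl[i]][cl[j]] += v
            cnts[rl[i]][cl[j]] += 1
    return [[sums[a][b] / cnts[a][b] if cnts[a][b] else 0.0
             for b in range(l)] for a in range(k)]

def alternate(M, k, l, iters=20, rl=None, cl=None):
    """Lloyd-style alternating minimisation for a k-by-l block model."""
    n, m = len(M), len(M[0])
    if rl is None:
        rl = [i * k // n for i in range(n)]  # crude initial row labels
    if cl is None:
        cl = [j * l // m for j in range(m)]  # crude initial column labels
    for _ in range(iters):
        Q = block_means(M, rl, cl, k, l)
        # reassign each row to the cluster with least squared error
        rl = [min(range(k),
                  key=lambda a: sum((M[i][j] - Q[a][cl[j]]) ** 2
                                    for j in range(m)))
              for i in range(n)]
        Q = block_means(M, rl, cl, k, l)
        # then reassign each column the same way
        cl = [min(range(l),
                  key=lambda b: sum((M[i][j] - Q[rl[i]][b]) ** 2
                                    for i in range(n)))
              for j in range(m)]
    return rl, cl, block_means(M, rl, cl, k, l)
```

The returned block means form the piecewise-constant approximation of the least-squares estimator; in practice one would restart from several initial labelings and keep the best fit.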