The Epidemiology and Genetic Architecture of Vitamin D Deficiency in African Children
Vitamin D deficiency is a common public health problem worldwide. However, little is known about the epidemiology of vitamin D deficiency in Africa. In this thesis, I aimed to determine: 1) the prevalence of and risk factors associated with vitamin D deficiency in studies conducted in Africa; 2) the prevalence and predictors of vitamin D deficiency in African children; 3) the association between vitamin D and iron deficiency in African children; and 4) genetic variants that influence vitamin D status in Africans.
In a systematic review and meta-analyses of previous vitamin D studies in Africa, the average prevalence of low vitamin D status was 18.5%, 34.2% and 59.5% using cut-offs of 25-hydroxyvitamin D (25(OH)D) levels of <30 nmol/L, <50 nmol/L and <75 nmol/L, respectively. Populations at risk of vitamin D deficiency included newborns, women, and people living in high latitudes or urban areas.
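The three prevalence figures above follow directly from the three 25(OH)D cut-offs. As a minimal sketch (a hypothetical helper, not from the thesis; the band names are illustrative labels), classifying a measurement against those cut-offs looks like:

```python
# Classify a serum 25-hydroxyvitamin D (25(OH)D) measurement against the
# three cut-offs used in the review. Band names are illustrative.
def classify_25ohd(nmol_per_l: float) -> str:
    """Return the lowest cut-off band a 25(OH)D measurement falls under."""
    if nmol_per_l < 30:
        return "low (<30 nmol/L)"
    if nmol_per_l < 50:
        return "low (<50 nmol/L)"
    if nmol_per_l < 75:
        return "low (<75 nmol/L)"
    return "sufficient (>=75 nmol/L)"

print(classify_25ohd(42))  # falls below the 50 nmol/L cut-off
```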
In an epidemiological study of young children living in Africa, the prevalence of low vitamin D status was 0.6%, 7.8% and 44.5% using cut-offs of 25(OH)D levels of <30 nmol/L, <50 nmol/L and <75 nmol/L, respectively. Vitamin D status was also influenced by the GC2 variant of the group-specific component (GC) gene, which encodes vitamin D binding protein.
Vitamin D deficiency was also associated with 80% higher odds of iron deficiency in these children. Adjusted regression models revealed that vitamin D deficiency was associated with higher ferritin and hepcidin levels, suggesting lower iron status, and with reduced sTfR and transferrin levels and increased TSAT and serum iron levels, suggesting improved iron status.
A genome-wide association study (GWAS) in Africans revealed genetic variants influencing vitamin D status in the vitamin D metabolism genes DHCR7/NADSYN1, CYP2R1 and GC. However, the majority of SNPs from previous European GWASs did not replicate in the current GWAS.
Findings from this thesis indicate that vitamin D deficiency is prevalent in many African populations and should be considered in public health strategies in Africa.
Modelling uncertainties for measurements of the H → γγ Channel with the ATLAS Detector at the LHC
The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this
final state has yielded some of the most precise measurements of the particle. As
measurements of the Higgs boson become increasingly precise, greater importance is placed on the factors that constitute the uncertainty. Reducing the effects of these uncertainties requires an understanding of their causes. The research presented in this thesis aims to illuminate how uncertainties on simulation modelling are determined, and proffers novel techniques for deriving them.
An upgrade of the FastCaloSim tool is described; the tool simulates events in the ATLAS calorimeter at a rate far exceeding that of the nominal detector simulation, Geant4. The integration of a method that allows the toolbox to emulate the accordion geometry of the liquid argon calorimeters is detailed. This tool allows for the production of larger samples while using significantly fewer computing resources.
A measurement of the total Higgs boson production cross-section multiplied
by the diphoton branching ratio (σ × Bγγ) is presented, where this value was
determined to be (σ × Bγγ)obs = 127 ± 7 (stat.) ± 7 (syst.) fb, within agreement
with the Standard Model prediction. The signal and background shape modelling
is described, and the contribution of the background modelling uncertainty to the
total uncertainty ranges from 2.4% to 18%, depending on the Higgs boson production
mechanism.
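Independent statistical and systematic components like those quoted above are conventionally combined in quadrature to give a total uncertainty (the exact combination used in the analysis may differ; this is only the standard recipe):

```python
import math

# Combine the stat. and syst. components of the sigma x B(gamma gamma)
# measurement in quadrature, assuming they are independent.
stat, syst = 7.0, 7.0  # fb
total = math.hypot(stat, syst)
print(f"(sigma x B)_obs = 127 +/- {total:.1f} fb")  # ~ +/- 9.9 fb total
```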
A method for estimating the number of events in a Monte Carlo background
sample required to model the shape is detailed. It was found that the nominal γγ background sample required a multiplicative increase in size by a factor of 3.60 to adequately model the background at a confidence level of 68%, or by a factor of 7.20 at a confidence level of 95%. Based on this estimate,
0.5 billion additional simulated events were produced, substantially reducing the
background modelling uncertainty.
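The generic scaling behind such sample-size estimates (an assumption here, not the thesis's exact procedure) is that the relative Monte Carlo statistical uncertainty on a template falls as 1/√N, so reaching a target precision requires a quadratic increase in sample size:

```python
# Relative MC statistical uncertainty scales as 1/sqrt(N), so the sample
# must grow by the square of the desired precision improvement.
def required_scale_factor(current_rel_unc: float, target_rel_unc: float) -> float:
    """Factor by which the MC sample must grow to reach the target precision."""
    return (current_rel_unc / target_rel_unc) ** 2

# e.g. halving the MC statistical uncertainty needs 4x the events
print(required_scale_factor(0.02, 0.01))  # -> 4.0
```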
A technique is detailed for emulating the effects of Monte Carlo event generator
differences using multivariate reweighting. The technique is used to estimate the
event generator uncertainty on the signal modelling of tHqb events, improving the
reliability of estimating the tHqb production cross-section. Then this multivariate
reweighting technique is used to estimate the generator modelling uncertainties
on background V γγ samples for the first time. The estimated uncertainties were
found to be covered by the currently assumed background modelling uncertainty.
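The core idea of multivariate reweighting is to estimate the density ratio between the two generators' event distributions and apply it as per-event weights. A simplified histogram-based sketch (a stand-in for the BDT reweighting typically used; the Gaussian "generators" and variables are invented for illustration):

```python
import numpy as np

# Weight events from generator A so their joint (x, y) distribution
# matches generator B, using a binned estimate of the density ratio.
rng = np.random.default_rng(0)
gen_a = rng.normal([0.0, 0.0], 1.0, size=(100_000, 2))
gen_b = rng.normal([0.3, 0.0], 1.0, size=(100_000, 2))

edges = [np.linspace(-4, 4, 21)] * 2
h_a, _, _ = np.histogram2d(gen_a[:, 0], gen_a[:, 1], bins=edges, density=True)
h_b, _, _ = np.histogram2d(gen_b[:, 0], gen_b[:, 1], bins=edges, density=True)
ratio = np.divide(h_b, h_a, out=np.ones_like(h_b), where=h_a > 0)

# Look up each A event's bin and take the ratio there as its weight
ix = np.clip(np.digitize(gen_a[:, 0], edges[0]) - 1, 0, 19)
iy = np.clip(np.digitize(gen_a[:, 1], edges[1]) - 1, 0, 19)
weights = ratio[ix, iy]

# After reweighting, A's mean of x should shift towards B's (0.3)
print(gen_a[:, 0].mean(), np.average(gen_a[:, 0], weights=weights))
```

A BDT reweighter replaces the coarse histogram ratio with a smooth, higher-dimensional classifier-based estimate of the same density ratio.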
Transfer learning for operator selection: A reinforcement learning approach
In the past two decades, metaheuristic optimisation algorithms (MOAs) have become increasingly popular, particularly for logistics, science, and engineering problems. A fundamental characteristic of such algorithms is that they depend on parameters or strategies, and various online and offline strategies are employed to obtain optimal configurations. Adaptive operator selection is one of these: it determines whether or not to update a strategy from the strategy pool during the search process. In machine learning, Reinforcement Learning (RL) refers to goal-oriented algorithms that learn from the environment how to achieve a goal. In MOAs, reinforcement learning has been utilised to control the operator selection process. However, existing research has not shown that learned information can be transferred from one problem-solving procedure to another. The primary goal of the proposed research is to determine the impact of transfer learning on RL and MOAs. As a test problem, a set union knapsack problem with 30 separate benchmark problem instances is used, and the results are statistically compared in depth. According to the findings, the learning process improved the convergence speed while significantly reducing the CPU time.
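RL-based adaptive operator selection can be sketched as a simple bandit: each operator has a learned value estimate, the selector balances exploration and exploitation, and transfer learning amounts to carrying the learned values over to the next problem instance instead of resetting them. The epsilon-greedy formulation and reward values below are illustrative assumptions, not the thesis's exact setup:

```python
import random

# Epsilon-greedy adaptive operator selection for a metaheuristic.
class OperatorSelector:
    def __init__(self, n_ops, epsilon=0.1, lr=0.1):
        self.q = [0.0] * n_ops          # estimated value of each operator
        self.epsilon, self.lr = epsilon, lr

    def select(self):
        if random.random() < self.epsilon:              # explore
            return random.randrange(len(self.q))
        return max(range(len(self.q)), key=self.q.__getitem__)  # exploit

    def update(self, op, reward):
        # Incremental value update; transferring self.q between problem
        # instances is the "transfer learning" studied in the abstract.
        self.q[op] += self.lr * (reward - self.q[op])

random.seed(1)
sel = OperatorSelector(n_ops=3)
true_reward = [0.2, 0.8, 0.5]          # operator 1 is best on this instance
for _ in range(500):
    op = sel.select()
    sel.update(op, true_reward[op] + random.gauss(0, 0.1))
print(sel.q.index(max(sel.q)))         # the selector learns to prefer operator 1
```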
Learning disentangled speech representations
A variety of informational factors are contained within the speech signal and a single short recording of speech reveals much more than the spoken words. The best method to extract and represent informational factors from the speech signal ultimately depends on which informational factors are desired and how they will be used. In addition, sometimes methods will capture more than one informational factor at the same time such as speaker identity, spoken content, and speaker prosody.
The goal of this dissertation is to explore different ways to deconstruct the speech signal into abstract representations that can be learned and later reused in various speech technology tasks. This task of deconstructing, also known as disentanglement, is a form of distributed representation learning. As a general approach to disentanglement, there are some guiding principles that elaborate what a learned representation should contain as well as how it should function. In particular, learned representations should contain all of the requisite information in a more compact manner, be interpretable, remove nuisance factors of irrelevant information, be useful in downstream tasks, and independent of the task at hand. The learned representations should also be able to answer counter-factual questions.
In some cases, learned speech representations can be re-assembled in different ways according to the requirements of downstream applications. For example, in a voice conversion task, the speech content is retained while the speaker identity is changed. And in a content-privacy task, some targeted content may be concealed without affecting how surrounding words sound. While there is no single-best method to disentangle all types of factors, some end-to-end approaches demonstrate a promising degree of generalization to diverse speech tasks.
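The re-assembly idea can be made concrete with a toy sketch (the encoders and decoder here are invented stand-ins, not a real model): voice conversion keeps the content code of one utterance and swaps in the speaker code of another.

```python
import numpy as np

# Toy disentangled codes: "what was said" vs "who said it".
rng = np.random.default_rng(0)
content_a = rng.normal(size=64)    # content code from utterance A
speaker_b = rng.normal(size=16)    # speaker code from a target speaker B

def decode(content, speaker):
    # Stand-in for a trained decoder: simply joins the two codes.
    return np.concatenate([content, speaker])

converted = decode(content_a, speaker_b)   # A's words, B's voice
print(converted.shape)
```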
This thesis explores a variety of use-cases for disentangled representations including phone recognition, speaker diarization, linguistic code-switching, voice conversion, and content-based privacy masking. Speech representations can also be utilised for automatically assessing the quality and authenticity of speech, such as automatic MOS ratings or detecting deep fakes. The meaning of the term "disentanglement" is not well defined in previous work, and it has acquired several meanings depending on the domain (e.g. image vs. speech). Sometimes the term "disentanglement" is used interchangeably with the term "factorization". This thesis proposes that disentanglement of speech is distinct, and offers a viewpoint of disentanglement that can be considered both theoretically and practically.
Vortex identification methods applied to wind turbine tip vortices
This study describes the impact of postprocessing methods on the calculated parameters of tip vortices of a wind turbine model when tested using particle image velocimetry (PIV). Several vortex identification methods and differentiation schemes are compared. The chosen methods are based on two components of the velocity field and their derivatives. They are applied to each instantaneous velocity field from the dataset and also to the calculated average velocity field. The methodologies are compared through the vortex center location, vortex core radius and jittering zone.
Results show that the tip vortex center locations and radii agree well between methods, varying by only a few grid spacings. Conversely, the convection velocity and the jittering surface, defined as the area where the instantaneous vortex centers are located, vary between identification methods.
Overall, the examined parameters depend significantly on the postprocessing method and selected vortex identification criteria. Therefore, this study demonstrates that selecting suitable postprocessing methods for PIV data is pivotal to ensuring robust results.
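One common identification step of the kind compared in the study (a simplified sketch with a synthetic vortex, not the paper's data) is to compute the out-of-plane vorticity from the two in-plane velocity components by finite differences and locate its extremum as the vortex center:

```python
import numpy as np

# Synthetic 2D velocity field with a single Lamb-Oseen-like vortex at the
# origin, sampled on a PIV-style regular grid.
n, dx = 64, 0.1
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x, indexing="xy")

r2 = X**2 + Y**2 + 1e-12
u = -Y / r2 * (1 - np.exp(-r2))
v = X / r2 * (1 - np.exp(-r2))

# Out-of-plane vorticity via central differences (a common PIV scheme)
dv_dx = np.gradient(v, dx, axis=1)
du_dy = np.gradient(u, dx, axis=0)
vorticity = dv_dx - du_dy

iy, ix = np.unravel_index(np.argmax(np.abs(vorticity)), vorticity.shape)
print("vortex center at", X[iy, ix], Y[iy, ix])  # near (0, 0)
```

Criteria such as lambda-2 or Gamma-1 replace the vorticity map with other scalar fields, which is exactly the kind of methodological choice the study compares.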
Towards a more just refuge regime: quotas, markets and a fair share
The international refugee regime is beset by two problems: Responsibility for refuge falls
disproportionately on a few states and many owed refuge do not get it. In this work, I explore
remedies to these problems. One is a quota distribution wherein states are distributed
responsibilities via allotment. Another is a marketized quota system wherein states are free to buy
and sell their allotments with others. I explore these in three parts. In Part 1, I develop the prime
principles upon which a just regime is built and with which alternatives can be adjudicated. The
first and most important principle – ‘Justice for Refugees’ – stipulates that a just regime provides
refuge for all who have a basic interest in it. The second principle – ‘Justice for States’ – stipulates
that a just distribution of refuge responsibilities among states is one that is capacity considerate. In
Part 2, I take up several vexing questions regarding the distribution of refuge responsibilities
among states in a collective effort. First, what is a state’s ‘fair share’? The answer requires the
determination of some logic – some metric – with which a distribution is determined. I argue that
one popular method in the political theory literature – a GDP-based distribution – is normatively
unsatisfactory. In its place, I posit several alternative metrics that are more attuned to the
principles of justice but absent from the political theory literature: GDP adjusted for Purchasing
Power Parity and the Human Development Index. I offer an exploration of both of these. Second,
are states required to ‘take up the slack’ left by defaulting peers? Here, I argue that duties of help
remain intact in cases of partial compliance among states in the refuge regime, but that political
concerns may require that such duties be applied with caution. I submit that a market instrument
offers one practical solution to this problem, as well as other advantages. In Part 3, I take aim at
marketization and grapple with its many pitfalls: That marketization is commodifying, that it is
corrupting, and that it offers little advantage in providing quality protection for refugees. In
addition to these, I apply a framework of moral markets developed by Debra Satz. I argue that a
refuge market may satisfy Justice Among States, but that it is violative of the refugees’ welfare
interest in remaining free of degrading and discriminatory treatment.
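A capacity-based quota of the kind discussed above can be sketched in a few lines (the capacity figures are invented for illustration; the thesis's candidate metrics are GDP adjusted for Purchasing Power Parity or the Human Development Index):

```python
# Each state's share of a global refuge responsibility is set in
# proportion to a capacity metric (hypothetical GDP-PPP values here).
capacity = {"A": 20_000, "B": 50_000, "C": 30_000}
total_refugees = 10_000

total_capacity = sum(capacity.values())
quota = {s: round(total_refugees * c / total_capacity) for s, c in capacity.items()}
print(quota)  # {'A': 2000, 'B': 5000, 'C': 3000}
```

Under the marketized variant, these allotments become the tradable baseline rather than a fixed obligation.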
Detection of Hyperpartisan news articles using natural language processing techniques
Yellow journalism has increased the spread of hyperpartisan news on the internet, and it is very difficult for readers to distinguish hyperpartisan news articles from mainstream ones. There is a need for an automated model that can detect hyperpartisan news on the internet and tag it as such, making it easy for readers to avoid. A hyperpartisan news detection model was developed using three different natural language processing techniques: BERT, ELMo, and Word2vec. This research used the by-article dataset published at SemEval-2019. The ELMo word embeddings, used to train a Random Forest classifier, achieved an accuracy of 0.88, which is much better than other state-of-the-art models. The BERT and Word2vec models achieved the same accuracy of 0.83. This research tried different sentence input lengths for BERT and showed that BERT can extract context from local words. As evidenced by the described ML models, this study will assist governments, news readers, and other political stakeholders in detecting hyperpartisan news, and will also help policymakers track and regulate misinformation about political parties and their leaders.
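The embed-then-classify pipeline can be sketched as below, with TF-IDF features standing in for the much heavier ELMo embeddings and toy articles invented for illustration (neither the features nor the data are the study's):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny invented corpus: 1 = hyperpartisan, 0 = mainstream.
train_texts = [
    "the radical opposition is destroying our great nation",
    "they are traitors and enemies of the real people",
    "the committee reviewed the budget proposal on tuesday",
    "officials announced the infrastructure plan this week",
]
train_labels = [1, 1, 0, 0]

# Vectorise the text, then fit the classifier on the embeddings.
vec = TfidfVectorizer()
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(vec.fit_transform(train_texts), train_labels)

test = ["the enemies of the nation are destroying everything"]
print(clf.predict(vec.transform(test)))
```

Swapping the vectoriser for contextual embeddings (ELMo, BERT) while keeping the downstream classifier is the comparison the study performs.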
Increased lifetime of Organic Photovoltaics (OPVs) and the impact of degradation, efficiency and costs in the LCOE of Emerging PVs
Emerging photovoltaic (PV) technologies such as organic photovoltaics (OPVs) and perovskites (PVKs) have the potential to disrupt the PV market due to their ease of fabrication (compatible with cheap roll-to-roll processing) and installation, as well as their significant efficiency improvements in recent years. However, rapid degradation is still an issue present in many emerging PVs, which must be addressed to enable their commercialisation. This thesis shows an OPV lifetime enhancing technique by adding the insulating polymer PMMA to the active layer, and a novel model for quantifying the impact of degradation (alongside efficiency and cost) upon levelized cost of energy (LCOE) in real world emerging PV installations.
The effect of PMMA morphology on the success of a ternary strategy was investigated, leading to device design guidelines. It was found that either increasing the weight percent (wt%) or molecular weight (MW) of PMMA resulted in an increase in the volume of PMMA-rich islands, which provided the OPV protection against water and oxygen ingress. It was also found that adding PMMA can be effective in enhancing the lifetime of different active material combinations, although not to the same extent, and that processing additives can have a negative impact on device lifetime.
A novel model was developed taking into account realistic degradation profiles sourced from a literature review of state-of-the-art OPV and PVK devices. It was found that optimal strategies to improve LCOE depend on the present characteristics of a device, and that panels with a good balance of efficiency and degradation were better than panels with higher efficiency but higher degradation as well. Further, it was found that low-cost locations benefited more from reductions in the degradation rate and module cost, whilst high-cost locations benefited more from improvements in initial efficiency, lower discount rates and reductions in install costs.
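A degradation-aware LCOE follows the standard definition (discounted lifetime costs divided by discounted lifetime energy), with annual output declining at the degradation rate. The parameter values below are illustrative assumptions, not the thesis's inputs:

```python
# Levelised cost of energy with a constant annual degradation rate.
def lcoe(capex, opex_per_year, annual_kwh_initial, degradation_rate,
         discount_rate, lifetime_years):
    """LCOE in cost units per kWh: discounted costs / discounted energy."""
    costs = capex
    energy = 0.0
    for t in range(1, lifetime_years + 1):
        discount = (1 + discount_rate) ** t
        costs += opex_per_year / discount
        # Output declines by the degradation rate each year
        energy += annual_kwh_initial * (1 - degradation_rate) ** (t - 1) / discount
    return costs / energy

fast_degrading = lcoe(800, 10, 1500, 0.04, 0.05, 20)
stable = lcoe(800, 10, 1400, 0.01, 0.05, 20)
print(fast_degrading, stable)
```

With these numbers, the panel with slightly lower initial output but slower degradation achieves the lower LCOE, illustrating the balance the thesis identifies.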
Exploring the Structure of Scattering Amplitudes in Quantum Field Theory: Scattering Equations, On-Shell Diagrams and Ambitwistor String Models in Gauge Theory and Gravity
In this thesis I analyse the structure of scattering amplitudes in supersymmetric gauge and gravitational theories in four-dimensional spacetime, starting with a detailed review of background material accessible to a non-expert. I then analyse the 4D scattering equations, developing the theory of how they can be used to express scattering amplitudes at tree level. I go on to explain how the equations can be solved numerically using a Monte Carlo algorithm, and introduce my Mathematica package treeamps4dJAF which performs these calculations. Next I analyse the relation between the 4D scattering equations and on-shell diagrams in N = 4 super Yang-Mills, which provides a new perspective on the tree level amplitudes of the theory. I apply a similar analysis to N = 8 supergravity, developing the theory of on-shell diagrams to derive new Grassmannian integral formulae for the amplitudes of the theory. In both theories I derive a new worldsheet expression for the 4 point one loop amplitude supported on 4D scattering equations. Finally I use 4D ambitwistor string theory to analyse scattering amplitudes in N = 4 conformal supergravity, deriving new worldsheet formulae for both plane wave and non-plane wave amplitudes supported on 4D scattering equations. I introduce a new prescription to calculate the derivatives of on-shell variables with respect to momenta, and I use this to show that certain non-plane wave amplitudes can be calculated as momentum derivatives of amplitudes with plane wave states.
Unraveling the effect of sex on human genetic architecture
Sex is arguably the most important differentiating characteristic in most mammalian
species, separating populations into different groups, with varying behaviors, morphologies,
and physiologies based on their complement of sex chromosomes, amongst other factors. In
humans, despite males and females sharing nearly identical genomes, there are differences
between the sexes in complex traits and in the risk of a wide array of diseases. Sex provides
the genome with a distinct hormonal milieu, differential gene expression, and environmental
pressures arising from gendered societal roles. This raises the possibility of gene-by-sex (GxS)
interactions that may contribute to some of the phenotypic differences observed. In recent years,
there has been growing evidence of GxS,
with common genetic variation presenting different effects on males and females. These
studies have, however, been limited with regard to the number of traits studied and/or
statistical power. Understanding sex differences in genetic architecture is of great
importance as this could lead to improved understanding of potential differences in
underlying biological pathways and disease etiology between the sexes and in turn help
inform personalised treatments and precision medicine.
In this thesis we provide insights into both the scope and mechanism of GxS across the
genome of circa 450,000 individuals of European ancestry and 530 complex traits in the UK
Biobank. We found small yet widespread differences in genetic architecture across traits
through the calculation of sex-specific heritability, genetic correlations, and sex-stratified
genome-wide association studies (GWAS). We further investigated whether sex-agnostic
(non-stratified) efforts could potentially be missing information of interest, including sex-specific trait-relevant loci and increased phenotype prediction accuracies. Finally, we
studied the potential functional role of sex differences in genetic architecture through sex
biased expression quantitative trait loci (eQTL) and gene-level analyses.
Overall, this study marks a broad examination of the genetics of sex differences. Our findings
parallel previous reports, suggesting the presence of sexual genetic heterogeneity across
complex traits of generally modest magnitude. Furthermore, our results suggest the need to
consider sex-stratified analyses in future studies in order to shed light on possible sex-specific molecular mechanisms.
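A standard way to test for the sex-specific effects described above (a widely used approach in GxS studies, assumed here rather than taken from the thesis) is to compare male- and female-stratified GWAS effect sizes for a variant with a two-sample z-test:

```python
import math

# Heterogeneity z-statistic for one variant's effect in males vs females.
def sex_heterogeneity_z(beta_m, se_m, beta_f, se_f):
    return (beta_m - beta_f) / math.sqrt(se_m**2 + se_f**2)

# Illustrative effect sizes and standard errors (not real data)
z = sex_heterogeneity_z(beta_m=0.10, se_m=0.02, beta_f=0.02, se_f=0.02)
p = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal p-value
print(z, p)  # |z| ~ 2.83, p ~ 0.005: nominal evidence of heterogeneity
```

Genome-wide, such per-variant tests would be combined with stringent multiple-testing correction.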