18,982 research outputs found
Modelling uncertainties for measurements of the H → γγ Channel with the ATLAS Detector at the LHC
The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this
final state has yielded some of the most precise measurements of the particle. As
measurements of the Higgs boson become increasingly precise, greater import is
placed on the factors that constitute the uncertainty. Reducing the effects of these
uncertainties requires an understanding of their causes. The research presented
in this thesis aims to illuminate how uncertainties on simulation modelling are
determined and proffers novel techniques for deriving them.
The upgrade of the FastCaloSim tool, which simulates events in the ATLAS
calorimeter at a rate far exceeding that of the nominal detector simulation,
Geant4, is described. The integration of a method that allows the toolbox to emulate the
accordion geometry of the liquid argon calorimeters is detailed. This tool allows
for the production of larger samples while using significantly fewer computing
resources.
A measurement of the total Higgs boson production cross-section multiplied
by the diphoton branching ratio (σ × Bγγ) is presented, where this value was
determined to be (σ × Bγγ)obs = 127 ± 7 (stat.) ± 7 (syst.) fb, in agreement
with the Standard Model prediction. The signal and background shape modelling
is described, and the contribution of the background modelling uncertainty to the
total uncertainty ranges from 2.4 % to 18 %, depending on the Higgs boson production
mechanism.
A method for estimating the number of events in a Monte Carlo background
sample required to model the background shape is detailed. It was found that the
nominal γγ background sample needed to be enlarged by a factor of 3.60 to
adequately model the background at a confidence level of 68 %, or by a factor of
7.20 at a confidence level of 95 %. Based on this estimate,
0.5 billion additional simulated events were produced, substantially reducing the
background modelling uncertainty.
A technique is detailed for emulating the effects of Monte Carlo event generator
differences using multivariate reweighting. The technique is used to estimate the
event generator uncertainty on the signal modelling of tHqb events, improving the
reliability of estimating the tHqb production cross-section. Then this multivariate
reweighting technique is used to estimate the generator modelling uncertainties
on background V γγ samples for the first time. The estimated uncertainties were
found to be covered by the currently assumed background modelling uncertainty.
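The abstract does not spell out how the multivariate reweighting is implemented; a common way to realise it is the classifier-based density-ratio trick, sketched below under that assumption. A classifier is trained to separate samples from two event generators, and its odds are used as per-event weights that make the nominal sample mimic the alternative one. All distributions and numbers here are toy stand-ins, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for two event generators ("nominal" vs. "alternative"):
# samples of two kinematic variables (illustrative only).
nominal = rng.normal(loc=[0.0, 1.0], scale=[1.0, 0.5], size=(5000, 2))
alternative = rng.normal(loc=[0.3, 1.1], scale=[1.1, 0.5], size=(5000, 2))

# Density-ratio trick: train a classifier to separate the two samples;
# w(x) = p(alt|x) / p(nom|x) reweights nominal events toward the alternative.
X = np.vstack([nominal, alternative])
y = np.concatenate([np.zeros(len(nominal)), np.ones(len(alternative))])

# Minimal logistic regression fitted by gradient descent (bias column added).
Xb = np.hstack([X, np.ones((len(X), 1))])
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / len(y)

p_nom = 1.0 / (1.0 + np.exp(-np.hstack([nominal, np.ones((len(nominal), 1))]) @ w))
weights = p_nom / (1.0 - p_nom)          # per-event reweighting factors
weights *= len(weights) / weights.sum()  # keep the total yield fixed

# The reweighted nominal mean should move toward the alternative sample's mean.
reweighted_mean = np.average(nominal[:, 0], weights=weights)
```

In practice a boosted decision tree often replaces the logistic regression, since the weights only require calibrated class probabilities.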
Gamification in E-Learning: game factors to strengthen specific English pronunciation features in undergraduate students at UPTC Sogamoso
Gamification is a relatively new term that often denotes the use of game components in situations unrelated to the game itself to create enjoyable, fun, and motivating learning experiences for students (Werbach and Hunter, 2012). Therefore, analyzing the games' basic factors becomes essential when defining and using gamification as a strategy for English as a Foreign Language mediation to strengthen specific pronunciation features in UPTC Sogamoso undergraduate students.
The study procedure is based on action research by implementing the gamification strategy for mediation in English pronunciation, oriented to thirty students from different engineering, management, and technology programs at heterogeneous levels of English proficiency. The activities mainly focus on sound production, rhythm, stress, and intonation, segmental and suprasegmental pronunciation features.
The results showed a clear improvement in the segmental and suprasegmental features of the participants' pronunciation perception, as well as the contribution of the games' goals to phonetic and phonological instruction, of the game sensation to motivation for improving pronunciation, of the challenge set in the games to the participants' positive attitude, and of sociability to practical exposure to English pronunciation.
Image classification over unknown and anomalous domains
A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting.
Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each.
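The thesis does not give the module's exact form here; as a minimal sketch of the stated idea, each visual mode can be whitened with its own sample statistics instead of the pooled batch statistics that ordinary batch normalization uses. The mode labels and data below are illustrative.

```python
import numpy as np

def mode_norm(x, modes, eps=1e-5):
    """Normalize each sample with the statistics of its own visual mode.

    x     : (N, C) array of features
    modes : (N,) integer mode/domain label per sample
    Plain batch normalization pools statistics over the whole batch;
    here each mode is whitened with its own mean and variance instead.
    """
    out = np.empty_like(x, dtype=float)
    for m in np.unique(modes):
        idx = modes == m
        mu = x[idx].mean(axis=0)
        var = x[idx].var(axis=0)
        out[idx] = (x[idx] - mu) / np.sqrt(var + eps)
    return out

rng = np.random.default_rng(1)
# Two "modes" with very different statistics (e.g. photos vs. cartoons).
photos = rng.normal(0.0, 1.0, size=(200, 8))
cartoons = rng.normal(5.0, 3.0, size=(200, 8))
x = np.vstack([photos, cartoons])
modes = np.array([0] * 200 + [1] * 200)

normed = mode_norm(x, modes)
```

With pooled statistics, the two modes would remain as two shifted clusters after normalization; per-mode statistics remove that offset.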
While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains, without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data, without requiring domain labels to do so.
In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning, and anomaly detection in particular. While recent ideas have focused on developing self-supervised solutions for the one-class setting, in this thesis new methods based on transfer learning are formulated. Extensive experimental evidence demonstrates that a transfer-based perspective benefits new problems that have recently been proposed in the anomaly detection literature, in particular challenging semantic detection tasks.
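One common instantiation of a transfer-based one-class method (sketched here as an illustration, not necessarily the thesis's exact approach) scores test samples by their distance to features of the normal class extracted by a pretrained network. Random embeddings stand in for such pretrained features below.

```python
import numpy as np

def knn_anomaly_scores(train_feats, test_feats, k=5):
    """Score each test sample by its mean distance to the k nearest
    "normal" training feature vectors: larger score = more anomalous."""
    scores = []
    for f in test_feats:
        d = np.linalg.norm(train_feats - f, axis=1)
        scores.append(np.sort(d)[:k].mean())
    return np.array(scores)

rng = np.random.default_rng(2)
# Stand-ins for pretrained-network embeddings (hypothetical): the normal
# class clusters near the origin, anomalies lie far away.
normal_train = rng.normal(0.0, 1.0, size=(500, 16))
normal_test = rng.normal(0.0, 1.0, size=(50, 16))
anomalies = rng.normal(4.0, 1.0, size=(50, 16))

s_normal = knn_anomaly_scores(normal_train, normal_test)
s_anom = knn_anomaly_scores(normal_train, anomalies)
```

The appeal of this design is that no anomaly examples are needed at training time; only features of the normal class are stored.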
Data-to-text generation with neural planning
In this thesis, we consider the task of data-to-text generation, which takes non-linguistic
structures as input and produces textual output. The inputs can take the form of
database tables, spreadsheets, charts, and so on. The main application of data-to-text
generation is to present information in a textual format which makes it accessible to
a layperson who may otherwise find it difficult to interpret numerical figures.
The task can also automate routine document generation jobs, thus improving human
efficiency. We focus on generating long-form text, i.e., documents with multiple paragraphs. Recent approaches to data-to-text generation have adopted the very successful
encoder-decoder architecture or its variants. These models generate fluent (but often
imprecise) text and perform quite poorly at selecting appropriate content and ordering
it coherently. This thesis focuses on overcoming these issues by integrating content
planning with neural models. We hypothesize data-to-text generation will benefit from
explicit planning, which manifests itself in (a) micro planning, (b) latent entity planning, and (c) macro planning. Throughout this thesis, we assume the input to our
generator is tables (with records) in the sports domain, and the outputs are summaries
describing what happened in the game (e.g., who won or lost, who scored, etc.).
We first describe our work on integrating fine-grained or micro plans with data-to-text generation. As part of this, we generate a micro plan highlighting which records
should be mentioned and in which order, and then generate the document while taking
the micro plan into account.
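The micro-planning pipeline can be sketched as a two-stage process: select and order the records to mention, then realise text conditioned on that plan. In the thesis both stages are learned; the toy example below substitutes simple heuristics and templates, and all record fields are illustrative.

```python
# Records for one game (toy example; field names are illustrative only).
records = [
    {"type": "TEAM-PTS", "entity": "Hawks", "value": 104},
    {"type": "TEAM-PTS", "entity": "Magic", "value": 96},
    {"type": "PLAYER-PTS", "entity": "Smith", "value": 31},
    {"type": "PLAYER-AST", "entity": "Smith", "value": 4},
]

def micro_plan(records):
    """Content selection + ordering: pick the records worth mentioning and
    fix the order in which they will be realised (teams first, then the
    top scorer), mirroring a learned micro plan with simple heuristics."""
    teams = sorted((r for r in records if r["type"] == "TEAM-PTS"),
                   key=lambda r: -r["value"])
    players = sorted((r for r in records if r["type"] == "PLAYER-PTS"),
                     key=lambda r: -r["value"])
    return teams + players[:1]

def realise(plan):
    """Surface realisation conditioned on the micro plan."""
    winner, loser, top = plan[0], plan[1], plan[2]
    return (f"The {winner['entity']} defeated the {loser['entity']} "
            f"{winner['value']}-{loser['value']}. "
            f"{top['entity']} led all scorers with {top['value']} points.")

summary = realise(micro_plan(records))
```

Separating the plan from the realiser is what makes the generation interpretable: the plan states exactly which records the text is licensed to mention, and in what order.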
We then show how data-to-text generation can benefit from higher-level latent entity planning. Here, we make use of entity-specific representations which are dynamically updated. The text is generated conditioned on entity representations and the
records corresponding to the entities by using hierarchical attention at each time step.
We then combine planning with the high level organization of entities, events, and
their interactions. Such coarse-grained macro plans are learnt from data and given
as input to the generator. Finally, we present work on making macro plans latent
while incrementally generating a document paragraph by paragraph. We infer latent
plans sequentially with a structured variational model while interleaving the steps of
planning and generation. Text is generated by conditioning on previous variational
decisions and previously generated text.
Overall, our results show that planning makes data-to-text generation more interpretable, improves the factuality and coherence of the generated documents, and reduces redundancy in the output document.
Underwater optical wireless communications in turbulent conditions: from simulation to experimentation
Underwater optical wireless communication (UOWC) is a technology that aims to apply high speed optical wireless communication (OWC) techniques to the underwater channel. UOWC has the potential to provide high speed links over relatively short distances as part of a hybrid underwater network, along with radio frequency (RF) and underwater acoustic communications (UAC) technologies. However, there are some difficulties involved in developing a reliable UOWC link, namely, the complexity of the channel. The main focus throughout this thesis is to develop a greater understanding of the effects of the UOWC channel, especially underwater turbulence. This understanding is developed from basic theory through to simulation and experimental studies in order to gain a holistic understanding of turbulence in the UOWC channel.
This thesis first presents a method of modelling optical underwater turbulence through simulation that allows it to be examined in conjunction with absorption and scattering. In a stationary channel, this turbulence-induced scattering is shown to cause an increase in both spatial and temporal spreading at the receiver plane. It is also demonstrated, using the technique presented, that the relative impact of turbulence on a received signal is lower in a highly scattering channel, showing an in-built resilience of these channels. Received intensity distributions are presented, confirming that fluctuations in received power from this method follow the commonly used log-normal fading model. The impact of turbulence, as measured using this new modelling framework, on link performance, in terms of maximum achievable data rate and bit error rate, is also investigated.
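The log-normal fading model mentioned above can be illustrated with a short simulation: received intensity is modelled as a log-normal random variable, and the scintillation index (the normalized variance of intensity) is compared against the closed-form value. The turbulence strength chosen below is illustrative, not a value from the thesis.

```python
import numpy as np

rng = np.random.default_rng(3)

# Log-normal fading: received intensity I = I0 * exp(2X), X ~ N(mu, sigma_x^2).
# Choosing mu = -sigma_x^2 keeps E[I] = I0 (energy conservation).
sigma_x = 0.2   # log-amplitude standard deviation (illustrative value)
n = 200_000
X = rng.normal(-sigma_x**2, sigma_x, size=n)
I = 1.0 * np.exp(2 * X)

# Scintillation index: normalized variance of the received intensity.
scint_index = I.var() / I.mean() ** 2

# For log-normal fading the closed form is sigma_I^2 = exp(4*sigma_x^2) - 1.
theory = np.exp(4 * sigma_x**2) - 1
```

The simulated index should match the closed form closely, which is the kind of consistency check used when validating a turbulence channel model against the log-normal assumption.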
Following that, experimental studies are presented comparing the relative impact of turbulence-induced scattering on coherent and non-coherent light propagating through water, and the relative impact of turbulence in different water conditions. It is shown that the scintillation index increases with increasing temperature inhomogeneity in the underwater channel. These results indicate that a light beam from a non-coherent source has a greater resilience to temperature-inhomogeneity-induced turbulence effects in an underwater channel. These results will help researchers simulate realistic channel conditions when modelling a light emitting diode (LED) based intensity modulation with direct detection (IM/DD) UOWC link.
Finally, a comparison of different modulation schemes in still and turbulent water conditions is presented. Using an underwater channel emulator, it is shown that pulse position modulation (PPM) and subcarrier intensity modulation (SIM) have an inherent resilience to turbulence-induced fading, with SIM achieving higher data rates under all conditions. The signal processing technique termed pair-wise coding (PWC) is applied to SIM in underwater optical wireless communications for the first time. The performance of PWC is compared with the state-of-the-art bit- and power-loading optimisation algorithm. Using PWC, a maximum data rate of 5.2 Gbps is achieved in still water conditions.
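As a sketch of the PPM scheme compared above: each group of log2(M) bits selects which of M time slots carries the pulse, and the maximum-likelihood detector simply picks the brightest slot per frame, which is why PPM tolerates slow fading well. The fading and noise parameters below are illustrative, not taken from the experiments.

```python
import numpy as np

def ppm_encode(bits, M=4):
    """M-ary PPM: each log2(M) bits select which of M slots holds the pulse."""
    k = int(np.log2(M))
    assert len(bits) % k == 0
    symbols = []
    for i in range(0, len(bits), k):
        slot = int("".join(str(int(b)) for b in bits[i:i + k]), 2)
        frame = np.zeros(M)
        frame[slot] = 1.0
        symbols.append(frame)
    return np.concatenate(symbols)

def ppm_decode(signal, M=4):
    """Maximum-likelihood PPM detection: pick the brightest slot per frame."""
    k = int(np.log2(M))
    frames = signal.reshape(-1, M)
    slots = frames.argmax(axis=1)
    return [int(b) for s in slots for b in format(s, f"0{k}b")]

rng = np.random.default_rng(4)
bits = [int(b) for b in rng.integers(0, 2, size=200)]
tx = ppm_encode(bits)

# Turbulence-induced fading (log-normal) plus additive receiver noise.
fading = rng.lognormal(mean=0.0, sigma=0.1, size=tx.shape)
rx = tx * fading + rng.normal(0.0, 0.05, size=tx.shape)
decoded = ppm_decode(rx)
bit_errors = sum(a != b for a, b in zip(bits, decoded))
```

Because detection only compares slots within one frame, a fade that scales the whole frame leaves the argmax decision unchanged, which is the inherent resilience referred to in the abstract.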
Unraveling the effect of sex on human genetic architecture
Sex is arguably the most important differentiating characteristic in most mammalian
species, separating populations into different groups, with varying behaviors, morphologies,
and physiologies based on their complement of sex chromosomes, amongst other factors. In
humans, despite males and females sharing nearly identical genomes, there are differences
between the sexes in complex traits and in the risk of a wide array of diseases. Sex provides
the genome with a distinct hormonal milieu, differential gene expression, and environmental
pressures arising from societal gender roles. This raises the possibility of
gene-by-sex (GxS) interactions that may contribute to some of the phenotypic
differences observed between the sexes. In recent years, there has been growing evidence of GxS,
with common genetic variation presenting different effects on males and females. These
studies have, however, been limited with regard to the number of traits studied and/or
statistical power. Understanding sex differences in genetic architecture is of great
importance as this could lead to improved understanding of potential differences in
underlying biological pathways and disease etiology between the sexes and in turn help
inform personalised treatments and precision medicine.
In this thesis we provide insights into both the scope and mechanism of GxS across the
genome of circa 450,000 individuals of European ancestry and 530 complex traits in the UK
Biobank. We found small yet widespread differences in genetic architecture across traits
through the calculation of sex-specific heritability, genetic correlations, and sex-stratified
genome-wide association studies (GWAS). We further investigated whether sex-agnostic
(non-stratified) efforts could be missing information of interest, including sex-specific trait-relevant loci and gains in phenotype prediction accuracy. Finally, we
studied the potential functional role of sex differences in genetic architecture through sex
biased expression quantitative trait loci (eQTL) and gene-level analyses.
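The core statistical idea behind sex-stratified analyses and GxS can be sketched on simulated data: fit a model with a genotype-by-sex interaction term, whose coefficient captures the difference between the male and female genetic effects. Everything below (effect sizes, allele frequency, sample size) is simulated for illustration, not drawn from the UK Biobank analyses.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20_000

# Simulated genotypes (0/1/2 allele counts), sex, and a trait whose
# genetic effect differs between the sexes (a GxS interaction).
maf = 0.3
g = rng.binomial(2, maf, size=n).astype(float)
sex = rng.integers(0, 2, size=n).astype(float)  # 0 = female, 1 = male
y = 0.10 * g + 0.25 * sex + 0.15 * g * sex + rng.normal(0, 1, size=n)

# Model: y = b0 + b1*g + b2*sex + b3*(g*sex); b3 is the GxS effect.
X = np.column_stack([np.ones(n), g, sex, g * sex])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Standard errors from the usual OLS formula; z statistic for b3.
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
z_gxs = beta[3] / np.sqrt(cov[3, 3])

# Sex-stratified effects are recoverable from the same fit:
beta_female = beta[1]            # effect of g in females
beta_male = beta[1] + beta[3]    # effect of g in males
```

A sex-agnostic analysis would fit only b1, averaging the two effects and diluting any variant whose effect is concentrated in one sex, which is the information loss the thesis investigates.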
Overall, this study marks a broad examination of the genetics of sex differences. Our findings
parallel previous reports, suggesting the presence of sexual genetic heterogeneity across
complex traits of generally modest magnitude. Furthermore, our results suggest the need to
consider sex-stratified analyses in future studies in order to shed light on possible sex-specific molecular mechanisms.
How to Be a God
When it comes to questions concerning the nature of Reality, philosophers and theologians have the answers.
Philosophers have the answers that can’t be proven right. Theologians have the answers that can’t be proven wrong.
Today’s designers of Massively-Multiplayer Online Role-Playing Games create realities for a living. They can’t spend centuries mulling over the issues: they have to face them head-on. Their practical experiences can indicate which theoretical proposals actually work in practice.
That’s today’s designers. Tomorrow’s will have a whole new set of questions to answer.
The designers of virtual worlds are the literal gods of those realities. Suppose Artificial Intelligence comes through and allows us to create non-player characters as smart as us. What are our responsibilities as gods? How should we, as gods, conduct ourselves?
How should we be gods?
The Role of English and Welsh INGOs: A Field Theory-Based Exploration of the Sector
This thesis takes a field theory-based approach to exploring the role of English and Welsh international non-governmental organisations (INGOs), using the lens of income source form.
First, the thesis presents new income source data drawn from 933 Annual Accounts published by 316 INGOs over three years (2015-2018). The research then draws on qualitative data from 90 Leaders' letters included within the Annual Reports published by 39 INGOs, as well as supplementary quantitative and qualitative data, to explore the ways in which INGOs represent their role.
Analysis of this income source data demonstrates that government funding is less important to most INGOs than has previously been assumed, while income from individuals is more important than has been recognised in the extant development studies literature. Funding from other organisations within the voluntary sector is the third most important source of income for these INGOs, while income from fees and trading is substantially less important than the other income source forms.
Using this income source data in concert with other quantitative data on INGO characteristics as well as qualitative data drawn from the Leaders' letters, I then show that the English and Welsh INGO sector is a heterogeneous space, divided into multiple fields. The set of fields identified by this thesis is arranged primarily around income source form, which is also associated with size, religious affiliation, and activities of focus and ways of working. As Bourdieusian field theory suggests, within these fields individual INGOs are engaged in an ongoing struggle for position: competing to demonstrate their maximal possession of the symbolic capitals they perceive to be valued by (potential) donors to that field.
Further analysis of these Leaders' letters, alongside additional Annual Reports and Accounts data, also reveals a dissonance in the way in which INGOs describe their relationship with local partners in these different communication types. While these Leaders' letters and narrative reports tell stories of collaborative associations with locally-based partners, this obscures the nature of these relationships as competitive and hierarchical.
The thesis draws on the above findings to reflect on the role of INGOs as suggested in the extant literature. This discussion highlights how the various potential INGO fields identified are associated with differing theoretical roles for INGOs. Finally, the thesis considers how INGO role representations continue to contribute to unequal power relations between INGOs and their partners.
Interactive Sonic Environments: Sonic artwork via gameplay experience
The purpose of this study is to investigate the use of video-game technology in the design and implementation of interactive, sonic-centric artworks, and to contribute to the discourse on, and understanding of, its effectiveness in electro-acoustic composition, highlighting the creative process. Key research questions include: How can the language of electro-acoustic music be placed in a new framework derived from videogame aesthetics and technology? What new creative processes need to be considered when using this medium? Moreover, what aspects of 'play' should be considered when designing the systems? The findings of this study assert that composers and sonic art practitioners need little or no coding knowledge to create exciting applications, and that the myriad options available to the composer when using video-game technology are limited only by imagination. Through a cyclic process of planning, building, testing and playing these applications, the project revealed advantages and unique sonic opportunities in comparison to other sonic art installations. A portfolio of selected original compositions, both fixed and open, is presented by the author to complement this study. The commentary serves to place the work in context with other practitioners in the field and to provide compositional approaches that have been taken.
Anytime algorithms for ROBDD symmetry detection and approximation
Reduced Ordered Binary Decision Diagrams (ROBDDs) provide a dense and memory efficient representation of Boolean functions. When ROBDDs are applied in logic synthesis, the problem arises of detecting both classical and generalised symmetries. State-of-the-art in symmetry detection is represented by Mishchenko's algorithm. Mishchenko showed how to detect symmetries in ROBDDs without the need for checking equivalence of all co-factor pairs. This work resulted in a practical algorithm for detecting all classical symmetries in an ROBDD in O(|G|³) set operations where |G| is the number of nodes in the ROBDD. Mishchenko and his colleagues subsequently extended the algorithm to find generalised symmetries. The extended algorithm retains the same asymptotic complexity for each type of generalised symmetry. Both the classical and generalised symmetry detection algorithms are monolithic in the sense that they only return a meaningful answer when they are left to run to completion. In this thesis we present efficient anytime algorithms for detecting both classical and generalised symmetries, that output pairs of symmetric variables until a prescribed time bound is exceeded. These anytime algorithms are complete in that given sufficient time they are guaranteed to find all symmetric pairs. Theoretically these algorithms reside in O(n³+n|G|+|G|³) and O(n³+n²|G|+|G|³) respectively, where n is the number of variables, so that in practice the advantage of anytime generality is not gained at the expense of efficiency. In fact, the anytime approach requires only very modest data structure support and offers unique opportunities for optimisation so the resulting algorithms are very efficient. The thesis continues by considering another class of anytime algorithms for ROBDDs that is motivated by the dearth of work on approximating ROBDDs. The need for approximation arises because many ROBDD operations result in an ROBDD whose size is quadratic in the size of the inputs. 
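For contrast with the ROBDD-based algorithms described above, the brute-force definition of classical symmetry can be checked directly on a truth table: variables i and j are symmetric iff swapping them never changes the function's value. This exponential-in-n baseline (illustrative only; the thesis's algorithms work on the ROBDD instead) makes the definition concrete.

```python
from itertools import product

def symmetric_pairs(f, n):
    """Return all classically symmetric variable pairs (i, j) of an
    n-variable Boolean function f, by exhaustively checking that swapping
    x_i and x_j never changes f. Exponential in n: this is the naive
    check that ROBDD-based symmetry detection avoids."""
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            if all(f(x) == f(x[:i] + (x[j],) + x[i + 1:j] + (x[i],) + x[j + 1:])
                   for x in product((0, 1), repeat=n)):
                pairs.append((i, j))
    return pairs

# Majority-of-three is totally symmetric; x0 AND (x1 OR x2) is symmetric
# only in its last two variables.
def maj3(x): return int(x[0] + x[1] + x[2] >= 2)
def f2(x): return x[0] & (x[1] | x[2])

pairs_maj = symmetric_pairs(maj3, 3)
pairs_f2 = symmetric_pairs(f2, 3)
```

An anytime variant of this check would simply emit each pair as soon as it is confirmed, rather than returning the full list at the end, which is the output behaviour the thesis's algorithms provide on ROBDDs.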
Furthermore, if ROBDDs are used in abstract interpretation, the running time of the analysis is related not only to the complexity of the individual ROBDD operations but also to the number of operations applied. The number of operations is, in turn, constrained by the number of times a Boolean function can be weakened before stability is achieved. This thesis proposes a widening that can be used to both constrain the size of an ROBDD and also ensure that the number of times that it is weakened is bounded by some given constant. The widening can be used to either systematically approximate an ROBDD from above (i.e. derive a weaker function) or below (i.e. infer a stronger function). The thesis also considers how randomised techniques may be deployed to improve the speed of computing an approximation by avoiding potentially expensive ROBDD manipulation.