Search for heavy Majorana or Dirac neutrinos and right-handed gauge bosons in final states with charged leptons and jets in pp collisions at √s = 13 TeV with the ATLAS detector
A search for heavy right-handed Majorana or Dirac neutrinos
and heavy right-handed gauge bosons is performed in events
with energetic electrons or muons, with the same or opposite electric charge,
and energetic jets. The search is carried out separately for topologies of
clearly separated final-state products ("resolved" channel) and topologies
with boosted final states whose hadronic products partially overlap and are
reconstructed as a large-radius jet ("boosted" channel). The events are
selected from pp collision data at the LHC with an integrated luminosity of
139 fb⁻¹ collected by the ATLAS detector at √s = 13 TeV. No
significant deviations from the Standard Model predictions are observed. The
results are interpreted within the theoretical framework of a left-right
symmetric model, and lower limits are set on masses in the heavy right-handed
W boson and heavy neutrino mass plane. The excluded region extends
to about TeV for both Majorana and Dirac
neutrinos at TeV. Heavy neutrinos with
masses of less than 3.5 (3.6) TeV are excluded in the electron (muon) channel
at TeV for the Majorana neutrinos, and limits of
up to 3.6 TeV for () TeV in
the electron (muon) channel are set for the Dirac neutrinos.
Comment: 48 pages in total, author list starting page 31, 9 figures, 5 tables,
submitted to EPJC. All figures including auxiliary figures are available at
https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/EXOT-2019-39
Concatenated Forward Error Correction with KP4 and Single Parity Check Codes
Concatenated forward error correction is studied using an outer KP4
Reed-Solomon code with hard-decision decoding and inner single parity check
(SPC) codes with Chase/Wagner soft-decision decoding. Analytical expressions
are derived for the end-to-end frame and bit error rates for transmission over
additive white Gaussian noise channels with binary phase-shift keying (BPSK)
and quaternary amplitude shift keying (4-ASK), as well as with symbol
interleavers and quantized channel outputs. The BPSK error rates are compared
to those of two other inner codes: a two-dimensional product code with SPC
component codes and an extended Hamming code. Simulation results for
unit-memory inter-symbol interference channels and 4-ASK are also presented.
The results show that the coding schemes achieve similar error rates, but SPC
codes have the lowest complexity and permit flexible rate adaptation.
Comment: Accepted for publication in IEEE/OSA Journal of Lightwave Technology
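As a rough illustration of the inner soft-decision step described above, the following is a minimal Python sketch of Wagner decoding for a single parity check code over BPSK on an AWGN channel. It is a generic textbook construction, not the authors' implementation; the block length, noise level, and function names are assumptions.

```python
import numpy as np

def wagner_decode_spc(llr):
    """Wagner (soft-decision) decoding of a single parity check code.

    llr : array of bit log-likelihood ratios (positive -> bit 0 more likely).
    Returns a decoded bit vector satisfying even parity.
    """
    bits = (llr < 0).astype(int)           # hard decisions from the channel LLRs
    if bits.sum() % 2 == 1:                # parity check fails
        bits[np.argmin(np.abs(llr))] ^= 1  # flip the least reliable bit
    return bits

# Toy usage: one SPC codeword over BPSK (0 -> +1, 1 -> -1) on an AWGN channel.
rng = np.random.default_rng(0)
info = rng.integers(0, 2, 7)
codeword = np.append(info, info.sum() % 2)      # append even-parity bit
tx = 1.0 - 2.0 * codeword                       # BPSK mapping
sigma = 0.6
rx = tx + sigma * rng.normal(size=tx.size)      # AWGN channel
llr = 2.0 * rx / sigma**2                       # channel LLRs
decoded = wagner_decode_spc(llr)
print("bit errors:", int(np.sum(decoded != codeword)))
```

For an SPC code this single flip is already the maximum-likelihood decision, which is one reason such inner codes stay cheap to decode.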
Implicit high-order gas-kinetic schemes for compressible flows on three-dimensional unstructured meshes
In previous studies, high-order gas-kinetic schemes (HGKS) have achieved success
for unsteady flows on three-dimensional unstructured meshes. In this paper, to
accelerate the convergence for steady flows, implicit non-compact and compact
HGKS are developed. For the non-compact scheme, a simple weighted essentially
non-oscillatory (WENO) reconstruction is used to achieve spatial accuracy, where
the reconstruction stencils contain two levels of neighboring cells. Combined
with the nonlinear generalized minimal residual (GMRES) method, the implicit
non-compact HGKS is developed. To improve the resolution and parallelism of the
non-compact HGKS, an implicit compact HGKS is developed with Hermite WENO
(HWENO) reconstruction, in which the reconstruction stencils contain only one
level of neighboring cells. The cell-averaged conservative variables are also
updated with the GMRES method. At the same time, a simple strategy is used to
update the cell-averaged gradients through the time evolution of the
spatial-temporal coupled gas distribution function. To accelerate the
computation, the implicit non-compact and compact HGKS are implemented on
graphics processing units (GPUs) using the compute unified device architecture
(CUDA). A variety of numerical examples, from subsonic to supersonic flows, are
presented to validate the accuracy, robustness, and efficiency of the schemes
for both inviscid and viscous flows.
Comment: arXiv admin note: text overlap with arXiv:2203.0904
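To make the implicit update concrete, here is a minimal, generic Python sketch of a matrix-free (Jacobian-free Newton-Krylov) GMRES step of the kind commonly used in implicit steady-state solvers. It is not the authors' HGKS code: the residual function, pseudo-time step, and tolerances are placeholders.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def implicit_step(u, residual, dt, eps=1e-7):
    """One backward-Euler style update  (I/dt + dR/du) du = -R(u), solved matrix-free.

    u        : current cell-averaged state, flattened to a 1-D array
    residual : function R(u) returning the steady-state residual
    dt       : (pseudo-)time step controlling the implicit relaxation
    """
    r0 = residual(u)

    def jvp(v):
        # Finite-difference Jacobian-vector product plus the 1/dt diagonal term.
        return (residual(u + eps * v) - r0) / eps + v / dt

    A = LinearOperator((u.size, u.size), matvec=jvp)
    du, info = gmres(A, -r0)             # Krylov solve for the state update
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return u + du

# Toy usage with a linear "residual" R(u) = A u - b standing in for the flux residual.
A_toy = np.array([[4.0, 1.0], [1.0, 3.0]])
b_toy = np.array([1.0, 2.0])
u = np.zeros(2)
for _ in range(5):
    u = implicit_step(u, lambda x: A_toy @ x - b_toy, dt=10.0)
print(u)  # approaches the steady state satisfying A_toy u = b_toy
```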
Ensuring Access to Safe and Nutritious Food for All Through the Transformation of Food Systems
Modelling uncertainties for measurements of the H → γγ Channel with the ATLAS Detector at the LHC
The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this
final state has yielded some of the most precise measurements of the particle. As
measurements of the Higgs boson become increasingly precise, greater importance is
placed on the factors that make up the uncertainty. Reducing the effects of these
uncertainties requires an understanding of their causes. The research presented
in this thesis aims to illuminate how simulation-modelling uncertainties are
determined, and it proposes novel techniques for deriving them.
An upgrade of the FastCaloSim tool, which simulates events in the ATLAS
calorimeter at a rate far exceeding that of the nominal detector simulation,
Geant4, is described. The integration of a method that allows the toolbox to
emulate the accordion geometry of the liquid-argon calorimeters is detailed.
This tool allows larger samples to be produced while using significantly fewer
computing resources.
A measurement of the total Higgs boson production cross-section multiplied
by the diphoton branching ratio (σ × Bγγ) is presented; this value was
determined to be (σ × Bγγ)obs = 127 ± 7 (stat.) ± 7 (syst.) fb, in agreement
with the Standard Model prediction. The signal and background shape modelling
is described, and the contribution of the background modelling uncertainty to the
total uncertainty ranges between 2.4 % and 18 %, depending on the Higgs boson
production mechanism.
A method for estimating the number of events in a Monte Carlo background
sample required to model the shape is detailed. It was found that the nominal
γγ background sample needed to be enlarged by a factor of 3.60 to adequately
model the background at a confidence level of 68 %, or by a factor of 7.20 at a
confidence level of 95 %. Based on this estimate,
0.5 billion additional simulated events were produced, substantially reducing the
background modelling uncertainty.
A technique is detailed for emulating the effects of Monte Carlo event generator
differences using multivariate reweighting. The technique is used to estimate the
event generator uncertainty on the signal modelling of tHqb events, improving the
reliability of the estimated tHqb production cross-section. This multivariate
reweighting technique is then used to estimate the generator modelling uncertainties
on background Vγγ samples for the first time. The estimated uncertainties were
found to be covered by the currently assumed background modelling uncertainty.
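The multivariate reweighting idea can be sketched in a few lines: train a classifier to separate events from two generators and use its output as a per-event density-ratio weight. The following Python/scikit-learn example is a generic, hypothetical illustration of classifier-based reweighting, not the thesis implementation; the feature distributions are invented.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-ins for multivariate event features from two generators (hypothetical).
rng = np.random.default_rng(1)
gen_a = rng.normal(loc=0.0, scale=1.0, size=(5000, 3))   # "nominal" generator
gen_b = rng.normal(loc=0.3, scale=1.2, size=(5000, 3))   # "alternative" generator

# Train a classifier to distinguish the two samples.
X = np.vstack([gen_a, gen_b])
y = np.concatenate([np.zeros(len(gen_a)), np.ones(len(gen_b))])
clf = GradientBoostingClassifier().fit(X, y)

# Per-event weights that morph generator A into generator B:
# w = p(B|x) / p(A|x), a density-ratio estimate from the classifier output.
p = clf.predict_proba(gen_a)[:, 1]
weights = p / (1.0 - p)
weights *= len(weights) / weights.sum()   # keep the total normalisation fixed

print("mean of feature 0 before/after reweighting:",
      gen_a[:, 0].mean(), np.average(gen_a[:, 0], weights=weights))
```

The difference between the reweighted nominal sample and the alternative sample then serves as an estimate of the generator modelling uncertainty.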
Coloniality and the Courtroom: Understanding Pre-trial Judicial Decision Making in Brazil
This thesis focuses on judicial decision making during custody hearings in Rio de Janeiro, Brazil. The impetus for the study is that while national and international protocols mandate the use of pre-trial detention only as a last resort, judges continue to detain people pre-trial in large numbers. Custody hearings were introduced in 2015, but the initiative has not produced the reduction in pre-trial detention that was hoped. This study aims to understand what informs judicial decision making at this stage. The research is approached through a decolonial lens to foreground legacies of colonialism, overlooked in mainstream criminological scholarship. This is an interview-based study, where key court actors (judges, prosecutors, and public defenders) and subject matter specialists were asked about influences on judicial decision making. Interview data is complemented by non-participatory observation of custody hearings. The research responds directly to Aliverti et al.'s (2021) call to ‘decolonize the criminal question’ by exposing and explaining how colonialism informs criminal justice practices. Answering the call in relation to judicial decision making, findings provide evidence that colonial-era assumptions, dynamics, and hierarchies were evident in the practice of custody hearings and continue to inform judges’ decisions, thus demonstrating the coloniality of justice. This study is significant for the new empirical data presented and theoretical innovation is also offered via the introduction of the ‘anticitizen’. The concept builds on Souza’s (2007) ‘subcitizen’ to account for the active pursuit of dangerous Others by judges casting themselves as crime fighters in a modern moral crusade. The findings point to the limited utility of human rights discourse – the normative approach to influencing judicial decision making around pre-trial detention – as a plurality of conceptualisations compete for dominance. This study has important implications for all actors aiming to reduce pre-trial detention in Brazil because unless underpinning colonial logics are addressed, every innovation risks becoming the next lei para inglês ver (law [just] for the English to see)
Image classification over unknown and anomalous domains
A longstanding goal in computer vision research is to develop methods that are simultaneously applicable to a broad range of prediction problems. In contrast to this, models often perform best when they are specialized to some task or data type. This thesis investigates the challenges of learning models that generalize well over multiple unknown or anomalous modes and domains in data, and presents new solutions for learning robustly in this setting.
Initial investigations focus on normalization for distributions that contain multiple sources (e.g. images in different styles like cartoons or photos). Experiments demonstrate the extent to which existing modules, batch normalization in particular, struggle with such heterogeneous data, and a new solution is proposed that can better handle data from multiple visual modes, using differing sample statistics for each.
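As a rough, hypothetical sketch of the idea of keeping separate sample statistics per visual mode (not the module proposed in the thesis, which must also cope with unknown mode labels), a normalization layer with per-mode running statistics might look as follows in PyTorch:

```python
import torch
import torch.nn as nn

class PerModeBatchNorm2d(nn.Module):
    """Keeps a separate BatchNorm2d per data mode (e.g. photos vs. cartoons).

    Hypothetical sketch: assumes the mode index of each batch is known,
    unlike the latent-domain setting discussed below.
    """
    def __init__(self, num_features, num_modes):
        super().__init__()
        self.norms = nn.ModuleList(
            nn.BatchNorm2d(num_features) for _ in range(num_modes)
        )

    def forward(self, x, mode):
        # Each mode is normalized with its own running mean and variance.
        return self.norms[mode](x)

# Toy usage: two visual modes sharing one convolutional backbone.
layer = PerModeBatchNorm2d(num_features=8, num_modes=2)
photos = torch.randn(4, 8, 16, 16)
cartoons = torch.randn(4, 8, 16, 16) * 3.0 + 1.0
out_a = layer(photos, mode=0)
out_b = layer(cartoons, mode=1)
```

The harder latent-domain setting discussed next removes the assumption that the mode index is known.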
While ideas to counter the overspecialization of models have been formulated in sub-disciplines of transfer learning, e.g. multi-domain and multi-task learning, these usually rely on the existence of meta information, such as task or domain labels. Relaxing this assumption gives rise to a new transfer learning setting, called latent domain learning in this thesis, in which training and inference are carried out over data from multiple visual domains, without domain-level annotations. Customized solutions are required for this, as the performance of standard models degrades: a new data augmentation technique that interpolates between latent domains in an unsupervised way is presented, alongside a dedicated module that sparsely accounts for hidden domains in data, without requiring domain labels to do so.
In addition, the thesis studies the problem of classifying previously unseen or anomalous modes in data, a fundamental problem in one-class learning, and anomaly detection in particular. While recent ideas have been focused on developing self-supervised solutions for the one-class setting, in this thesis new methods based on transfer learning are formulated. Extensive experimental evidence demonstrates that a transfer-based perspective benefits new problems that have recently been proposed in anomaly detection literature, in particular challenging semantic detection tasks
Enhancing Parkinson’s Disease Prediction Using Machine Learning and Feature Selection Methods
Several million people suffer from Parkinson's disease globally. Parkinson's affects about 1% of people over 60, and its symptoms increase with age. The voice may be affected, and patients experience abnormalities in speech that might not be noticed by listeners but can be analyzed using recorded speech signals. With huge advancements in technology, the volume of medical data has increased dramatically, and there is therefore a need to apply data mining and machine learning methods to extract new knowledge from this data. Several classification methods have been used to analyze medical data sets and diagnostic problems, such as Parkinson's Disease (PD). In addition, to improve classification performance, feature selection methods have been used extensively in many fields. This paper proposes a comprehensive approach to enhance the prediction of PD using several machine learning methods combined with different feature selection methods, such as filter-based and wrapper-based methods. The dataset includes 240 records with 46 acoustic features extracted from 3 voice recording replications for 80 patients. The experimental results showed improvements when the wrapper-based feature selection method was used with the KNN classifier, reaching an accuracy of 88.33%. The best obtained results were compared with other studies, and this study was found to provide comparable and, in some cases, superior results
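As a generic illustration of the wrapper-based selection plus KNN pipeline described above, a scikit-learn sketch could look like the following. It uses synthetic placeholder data rather than the paper's 240-record voice dataset; the number of selected features, neighbors, and cross-validation folds are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for the 46 acoustic features (not the real dataset).
X, y = make_classification(n_samples=240, n_features=46, n_informative=10,
                           random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)

# Wrapper-based selection: greedily add features that improve KNN's CV accuracy.
selector = SequentialFeatureSelector(knn, n_features_to_select=10,
                                     direction="forward", cv=5)
model = make_pipeline(StandardScaler(), selector, knn)

scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {scores.mean():.3f}")
```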
How to Be a God
When it comes to questions concerning the nature of Reality, Philosophers and Theologians have the answers.
Philosophers have the answers that can’t be proven right. Theologians have the answers that can’t be proven wrong.
Today’s designers of Massively-Multiplayer Online Role-Playing Games create realities for a living. They can’t spend centuries mulling over the issues: they have to face them head-on. Their practical experiences can indicate which theoretical proposals actually work in practice.
That’s today’s designers. Tomorrow’s will have a whole new set of questions to answer.
The designers of virtual worlds are the literal gods of those realities. Suppose Artificial Intelligence comes through and allows us to create non-player characters as smart as us. What are our responsibilities as gods? How should we, as gods, conduct ourselves?
How should we be gods?
Reliable Decision-Making with Imprecise Models
The rapid growth in the deployment of autonomous systems across various sectors has generated considerable interest in how these systems can operate reliably in large, stochastic, and unstructured environments. Despite recent advances in artificial intelligence and machine learning, it is challenging to assure that autonomous systems will operate reliably in the open world. One of the causes of unreliable behavior is the impreciseness of the model used for decision-making. Due to the practical challenges in data collection and precise model specification, autonomous systems often operate based on models that do not represent all the details in the environment. Even if the system has access to a comprehensive decision-making model that accounts for all the details in the environment and all possible scenarios the agent may encounter, it may be intractable to solve this complex model optimally. Consequently, this complex, high fidelity model may be simplified to accelerate planning, introducing imprecision. Reasoning with such imprecise models affects the reliability of autonomous systems. A system's actions may sometimes produce unexpected, undesirable consequences, which are often identified after deployment. How can we design autonomous systems that can operate reliably in the presence of uncertainty and model imprecision?
This dissertation presents solutions to address three classes of model imprecision in a Markov decision process, along with an analysis of the conditions under which bounded performance can be guaranteed. First, an adaptive outcome selection approach is introduced to devise risk-aware reduced models of the environment that efficiently balance the trade-off between model simplicity and fidelity, to accelerate planning in resource-constrained settings. Second, a framework that extends the stochastic shortest path formulation to problems with imperfect information about the goal state during planning is introduced, along with two approaches to solve this problem. Finally, two complementary solution approaches are presented to minimize the negative side effects of agent actions. The techniques presented in this dissertation enable an autonomous system to detect and mitigate undesirable behavior, without redesigning the model entirely.
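For context, the decision-making model referred to above is a standard Markov decision process. A minimal, textbook value-iteration sketch in Python is shown below as generic background; it is not the dissertation's reduced-model or goal-uncertainty methods, and the transition and reward numbers are invented.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Plain value iteration for a finite MDP.

    P : array of shape (A, S, S), transition probabilities P[a, s, s']
    R : array of shape (A, S), expected immediate reward for taking a in s
    Returns the optimal value function and a greedy policy.
    """
    n_states = P.shape[1]
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)          # Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# Tiny two-state, two-action example (hypothetical numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
V, policy = value_iteration(P, R)
print(V, policy)
```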