
    Fully-Autonomous, Vision-based Traffic Signal Control: from Simulation to Reality

    Ineffective traffic signal control is one of the major causes of congestion in urban road networks. Dynamically changing traffic conditions and live traffic state estimation are fundamental challenges that limit the ability of existing signal infrastructure to render individualized signal control in real time. We use deep reinforcement learning (DRL) to address these challenges. Because of the economic and safety constraints associated with training such agents in the real world, a practical approach is to do so in simulation before deployment. Domain randomisation is an effective technique for bridging the reality gap and ensuring effective transfer of simulation-trained agents to the real world. In this paper, we develop a fully-autonomous, vision-based DRL agent that achieves adaptive signal control in the face of complex, imprecise, and dynamic traffic environments. Our agent uses live visual data (i.e. a stream of real-time RGB footage) from an intersection to extensively perceive and subsequently act upon the traffic environment. Employing domain randomisation, we examine our agent’s generalisation capabilities under varying traffic conditions in both simulation and real-world environments. In a diverse validation set independent of the training data, our traffic control agent reliably adapted to novel traffic situations and demonstrated positive transfer to previously unseen real intersections despite being trained entirely in simulation.
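
    The thesis's simulator and randomised parameters are not specified above, so the following is only a minimal sketch of the domain-randomisation idea the abstract describes: hypothetical world parameters (traffic demand, lighting, camera jitter, textures) are resampled before every training episode so that a vision-based agent cannot overfit to a single rendering of the intersection.

```python
import random
from dataclasses import dataclass

# Hypothetical parameter set; the thesis's actual simulator and ranges are not given.
@dataclass
class DomainParams:
    arrival_rate_veh_per_min: float   # traffic demand
    sun_angle_deg: float              # scene lighting
    camera_jitter_px: float           # camera pose / calibration noise
    texture_seed: int                 # road and vehicle textures

def sample_domain(rng: random.Random) -> DomainParams:
    """Resample the simulated world before each training episode."""
    return DomainParams(
        arrival_rate_veh_per_min=rng.uniform(2.0, 40.0),
        sun_angle_deg=rng.uniform(10.0, 170.0),
        camera_jitter_px=rng.uniform(0.0, 5.0),
        texture_seed=rng.randrange(10_000),
    )

def train(num_episodes: int, seed: int = 0) -> None:
    rng = random.Random(seed)
    for episode in range(num_episodes):
        params = sample_domain(rng)
        # env = build_intersection_sim(params)  # simulator-specific, omitted here
        # ...run one DRL episode on RGB observations from `env`...
        print(f"episode {episode}: {params}")

if __name__ == "__main__":
    train(num_episodes=3)
```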

    Modelling uncertainties for measurements of the H → γγ Channel with the ATLAS Detector at the LHC

    The Higgs boson to diphoton (H → γγ) branching ratio is only 0.227 %, but this final state has yielded some of the most precise measurements of the particle. As measurements of the Higgs boson become increasingly precise, greater importance is placed on the factors that constitute the uncertainty. Reducing the effects of these uncertainties requires an understanding of their causes. The research presented in this thesis aims to illuminate how uncertainties on simulation modelling are determined and proffers novel techniques for deriving them. The upgrade of the FastCaloSim tool, used for simulating events in the ATLAS calorimeter at a rate far exceeding that of the nominal detector simulation, Geant4, is described. The integration of a method that allows the toolbox to emulate the accordion geometry of the liquid argon calorimeters is detailed. This tool allows for the production of larger samples while using significantly fewer computing resources. A measurement of the total Higgs boson production cross-section multiplied by the diphoton branching ratio (σ × Bγγ) is presented, where this value was determined to be (σ × Bγγ)obs = 127 ± 7 (stat.) ± 7 (syst.) fb, in agreement with the Standard Model prediction. The signal and background shape modelling is described; the contribution of the background modelling uncertainty to the total uncertainty ranges from 2.4 % to 18 %, depending on the Higgs boson production mechanism. A method for estimating the number of events in a Monte Carlo background sample required to model the shape is detailed. It was found that the nominal γγ background sample needed to be increased by a factor of 3.60 to adequately model the background with a confidence level of 68 %, or by a factor of 7.20 for a confidence level of 95 %. Based on this estimate, 0.5 billion additional simulated events were produced, substantially reducing the background modelling uncertainty. A technique is detailed for emulating the effects of Monte Carlo event generator differences using multivariate reweighting. The technique is used to estimate the event generator uncertainty on the signal modelling of tHqb events, improving the reliability of estimating the tHqb production cross-section. This multivariate reweighting technique is then used to estimate the generator modelling uncertainties on background V γγ samples for the first time. The estimated uncertainties were found to be covered by the currently assumed background modelling uncertainty.
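
    The abstract does not say how the multivariate reweighting is implemented; one common way to emulate one event generator with another is classifier-based density-ratio reweighting, sketched below purely as an illustration (the function name, features, and toy samples are hypothetical, and the thesis's actual tool may differ). A gradient-boosted classifier is trained to separate events from generators A and B, and its output p is turned into per-event weights w = p/(1 - p) that reweight sample A towards sample B.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def density_ratio_weights(events_a: np.ndarray, events_b: np.ndarray) -> np.ndarray:
    """Return weights for `events_a` so that its distribution mimics `events_b`.

    Rows are events, columns are kinematic features. This is generic
    density-ratio reweighting, not the thesis's exact procedure.
    """
    X = np.vstack([events_a, events_b])
    y = np.concatenate([np.zeros(len(events_a)), np.ones(len(events_b))])
    clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    clf.fit(X, y)
    p = clf.predict_proba(events_a)[:, 1]        # P(event looks like generator B)
    w = p / np.clip(1.0 - p, 1e-6, None)         # density-ratio estimate
    return w * len(events_a) / w.sum()           # keep the total yield unchanged

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gen_a = rng.normal(0.0, 1.0, size=(5000, 3))   # toy "generator A" sample
    gen_b = rng.normal(0.3, 1.2, size=(5000, 3))   # toy "generator B" sample
    weights = density_ratio_weights(gen_a, gen_b)
    print("weighted mean of A:", np.average(gen_a, axis=0, weights=weights))
```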

    Coloniality and the Courtroom: Understanding Pre-trial Judicial Decision Making in Brazil

    This thesis focuses on judicial decision making during custody hearings in Rio de Janeiro, Brazil. The impetus for the study is that while national and international protocols mandate the use of pre-trial detention only as a last resort, judges continue to detain people pre-trial in large numbers. Custody hearings were introduced in 2015, but the initiative has not produced the reduction in pre-trial detention that was hoped for. This study aims to understand what informs judicial decision making at this stage. The research is approached through a decolonial lens to foreground legacies of colonialism overlooked in mainstream criminological scholarship. This is an interview-based study, in which key court actors (judges, prosecutors, and public defenders) and subject matter specialists were asked about influences on judicial decision making. Interview data is complemented by non-participatory observation of custody hearings. The research responds directly to Aliverti et al.'s (2021) call to ‘decolonize the criminal question’ by exposing and explaining how colonialism informs criminal justice practices. Answering the call in relation to judicial decision making, the findings provide evidence that colonial-era assumptions, dynamics, and hierarchies were evident in the practice of custody hearings and continue to inform judges’ decisions, thus demonstrating the coloniality of justice. This study is significant for the new empirical data presented; theoretical innovation is also offered via the introduction of the ‘anticitizen’. The concept builds on Souza’s (2007) ‘subcitizen’ to account for the active pursuit of dangerous Others by judges casting themselves as crime fighters in a modern moral crusade. The findings point to the limited utility of human rights discourse, the normative approach to influencing judicial decision making around pre-trial detention, as a plurality of conceptualisations compete for dominance. This study has important implications for all actors aiming to reduce pre-trial detention in Brazil because, unless the underpinning colonial logics are addressed, every innovation risks becoming the next lei para inglês ver (law [just] for the English to see).

    Underwater optical wireless communications in turbulent conditions: from simulation to experimentation

    Underwater optical wireless communication (UOWC) is a technology that aims to apply high speed optical wireless communication (OWC) techniques to the underwater channel. UOWC has the potential to provide high speed links over relatively short distances as part of a hybrid underwater network, along with radio frequency (RF) and underwater acoustic communications (UAC) technologies. However, there are some difficulties involved in developing a reliable UOWC link, namely the complexity of the channel. The main focus throughout this thesis is to develop a greater understanding of the effects of the UOWC channel, especially underwater turbulence. This understanding is developed from basic theory through to simulation and experimental studies in order to gain a holistic understanding of turbulence in the UOWC channel. This thesis first presents a method of modelling optical underwater turbulence through simulation that allows it to be examined in conjunction with absorption and scattering. In a stationary channel, this turbulence-induced scattering is shown to cause an increase in both spatial and temporal spreading at the receiver plane. Using the presented technique, it is also demonstrated that the relative impact of turbulence on a received signal is lower in a highly scattering channel, showing an in-built resilience of such channels. Received intensity distributions are presented, confirming that fluctuations in received power from this method follow the commonly used log-normal fading model. The impact of turbulence, as measured using this new modelling framework, on link performance, in terms of maximum achievable data rate and bit error rate, is also investigated. Following that, experimental studies are presented comparing the relative impact of turbulence-induced scattering on coherent and non-coherent light propagating through water, and the relative impact of turbulence in different water conditions. It is shown that the scintillation index increases with increasing temperature inhomogeneity in the underwater channel. These results indicate that a light beam from a non-coherent source has a greater resilience to turbulence induced by temperature inhomogeneity in an underwater channel. These results will help researchers simulate realistic channel conditions when modelling a light emitting diode (LED) based intensity modulation with direct detection (IM/DD) UOWC link. Finally, a comparison of different modulation schemes in still and turbulent water conditions is presented. Using an underwater channel emulator, it is shown that pulse position modulation (PPM) and subcarrier intensity modulation (SIM) have an inherent resilience to turbulence-induced fading, with SIM achieving higher data rates under all conditions. The signal processing technique termed pair-wise coding (PWC) is applied to SIM in underwater optical wireless communications for the first time. The performance of PWC is compared with the state-of-the-art bit and power loading optimisation algorithm. Using PWC, a maximum data rate of 5.2 Gbps is achieved in still water conditions.
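
    As a concrete reference for two quantities used above, the log-normal fading model and the scintillation index, the sketch below applies the standard definitions (it is not the thesis's channel model or emulator): intensities are drawn from a log-normal weak-turbulence model and the scintillation index is estimated as σ_I² = ⟨I²⟩/⟨I⟩² − 1.

```python
import numpy as np

def scintillation_index(intensity: np.ndarray) -> float:
    """sigma_I^2 = <I^2> / <I>^2 - 1 for received intensity samples."""
    return float(np.mean(intensity**2) / np.mean(intensity)**2 - 1.0)

def lognormal_fading(mean_intensity: float, sigma_x: float, n: int, seed: int = 0) -> np.ndarray:
    """Draw intensities I = I0 * exp(2X) with X ~ N(-sigma_x^2, sigma_x^2),
    the usual log-normal weak-turbulence model (mean intensity preserved)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(-sigma_x**2, sigma_x, size=n)
    return mean_intensity * np.exp(2.0 * x)

if __name__ == "__main__":
    samples = lognormal_fading(mean_intensity=1.0, sigma_x=0.2, n=100_000)
    # For this model the exact value is exp(4 * sigma_x^2) - 1 ~= 0.17.
    print("estimated scintillation index:", scintillation_index(samples))
```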

    Applications of higher-form symmetries at strong and weak coupling

    In this thesis we consider two distinct applications of higher-form symmetries in quantum field theory. First we explore the spontaneous breaking of higher-form symmetry in a holographic quantum field theory containing matter fields in the fundamental representation of the gauge group U(N). At strong coupling, we numerically solve the bulk equations of motion to compute the current-current Green’s function and demonstrate the existence of a Goldstone mode. We then compare with direct analytic perturbative results obtained at weak coupling. In the second half of the thesis we work with a hydrodynamic effective field theory which possesses a higher-form symmetry. In particular, we consider a natural higher-derivative correction to force-free electrodynamics and compute a hydrodynamic transport coefficient from microscopics. Concretely, this is a perturbative QED calculation in a background magnetic field. Finally, we compare our findings to astrophysical observations.

    Exploring the Structure of Scattering Amplitudes in Quantum Field Theory: Scattering Equations, On-Shell Diagrams and Ambitwistor String Models in Gauge Theory and Gravity

    In this thesis I analyse the structure of scattering amplitudes in supersymmetric gauge and gravitational theories in four-dimensional spacetime, starting with a detailed review of background material accessible to a non-expert. I then analyse the 4D scattering equations, developing the theory of how they can be used to express scattering amplitudes at tree level. I go on to explain how the equations can be solved numerically using a Monte Carlo algorithm, and introduce my Mathematica package treeamps4dJAF, which performs these calculations. Next I analyse the relation between the 4D scattering equations and on-shell diagrams in N = 4 super Yang-Mills, which provides a new perspective on the tree-level amplitudes of the theory. I apply a similar analysis to N = 8 supergravity, developing the theory of on-shell diagrams to derive new Grassmannian integral formulae for the amplitudes of the theory. In both theories I derive a new worldsheet expression for the 4-point one-loop amplitude supported on the 4D scattering equations. Finally, I use 4D ambitwistor string theory to analyse scattering amplitudes in N = 4 conformal supergravity, deriving new worldsheet formulae for both plane wave and non-plane wave amplitudes supported on the 4D scattering equations. I introduce a new prescription for calculating the derivatives of on-shell variables with respect to momenta, and I use this to show that certain non-plane wave amplitudes can be calculated as momentum derivatives of amplitudes with plane wave states.

    Predicting limit-setting behavior of gamblers using machine learning algorithms: a real-world study of Norwegian gamblers using account data

    Player protection and harm minimization have become increasingly important in the gambling industry, along with the promotion of responsible gambling (RG). Among the most widespread RG tools that gaming operators provide are limit-setting tools that help players limit the amount of time and/or money they spend gambling. Research suggests that limit-setting significantly reduces the amount of money that players spend. If limit-setting is to be encouraged as a way of facilitating responsible gambling, it is important to know which variables influence whether individuals set and change limits in the first place. In the present study, 33 variables assessing player behavior among Norsk Tipping clientele (N = 70,789) from January to March 2017 were computed. These 33 behavioral variables were then used to predict the likelihood of gamblers changing their monetary limit between April and June 2017. The 70,789 players were randomly split into a training dataset of 56,532 players and an evaluation set of 14,157 players (corresponding to an 80/20 split). The results demonstrated that it is possible to predict future limit-setting from player behavior. The random forest algorithm appeared to predict limit-changing behavior much better than the other algorithms; however, on the independent test data its accuracy dropped significantly. The best performance on the test data, combined with only a small decrease in accuracy relative to the training data, was delivered by the gradient boosting machine algorithm. The most important variables for predicting future limit-setting with the gradient boosting machine were players receiving feedback that they had reached 80% of their personal monthly global loss limit, the personal monthly loss limit, the amount bet, theoretical loss, and whether the players had increased their limits in the past. With the help of predictive analytics, players with a high likelihood of changing their limits can be proactively approached.
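
    The study's code is not given; the sketch below is a hypothetical reconstruction of the pipeline the abstract describes, an 80/20 train/evaluation split followed by a gradient boosting classifier and a feature-importance ranking, using scikit-learn with placeholder data standing in for the 33 Norsk Tipping behavioural variables.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Placeholder data: 33 behavioural features per player and a binary
# "changed monetary limit in the following quarter" label.
n_players, n_features = 70_789, 33
X = rng.normal(size=(n_players, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_players) > 0).astype(int)

# 80/20 split, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05, max_depth=3)
model.fit(X_train, y_train)

print("train accuracy:", accuracy_score(y_train, model.predict(X_train)))
print("test accuracy: ", accuracy_score(y_test, model.predict(X_test)))

# Rank features by importance (stand-ins for variables such as
# "reached 80% of monthly global loss limit" in the real study).
top = np.argsort(model.feature_importances_)[::-1][:5]
print("top features by importance:", top)
```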

    Methods for the analysis of oscillatory integrals and Bochner-Riesz operators

    For a smooth surface Γ of arbitrary codimension, one can consider the Lp mapping properties of the Bochner-Riesz multiplier m(ζ) = dist(ζ, Γ)^α φ(ζ), where α > 0 and φ is an appropriate smooth cutoff function. Even for the sphere, the exact Lp boundedness range remains a central open problem in Euclidean harmonic analysis. We consider the Lp integrability of the Bochner-Riesz convolution kernel for a particular class of surfaces (of any codimension). For a subclass of these surfaces, the range of Lp integrability of the kernels differs substantially from the Lp boundedness range of the corresponding Bochner-Riesz multiplier operator. Extending work of Mockenhaupt, we then establish a range of operator bounds, which are sharp in the α exponent, under the assumption of an appropriate L2 restriction estimate. Hickman and Wright established sharp oscillatory integral estimates associated with a particular class of surfaces and derived restriction estimates. We extend this work to certain curves of standard type and corresponding surfaces of revolution. These surfaces are discussed as an explicit class for which we have Lp → Lp boundedness of the corresponding Bochner-Riesz operators. Understanding the structure of the roots of real polynomials is important in obtaining stable bounds for oscillatory integrals with polynomial phases. For real polynomials with exponents in some fixed set, Ψ(t) = x + y_1 t^{k_1} + ... + y_L t^{k_L}, we analyse the different possible root structures that can occur as the coefficients vary. We first establish a stratification of the roots into tiers containing roots of comparable size. We then show that at most L non-zero roots can cluster about a point. Supposing additional restrictions on the coefficients, we derive structural refinements. These structural results extend work of Kowalski and Wright and provide a characteristic picture of the root structure at coarse scales. As an application, these results are used to recover the sharp oscillatory integral estimates of Hickman and Wright, using bounds for oscillatory integrals due to Phong and Stein.
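
    For readability, the two objects referred to above can be displayed in full; this restates the abstract's own notation and adds nothing new.

```latex
% Bochner-Riesz multiplier adapted to a smooth surface \Gamma (any codimension),
% and the polynomial phase whose root structure is analysed.
\[
  m(\zeta) = \operatorname{dist}(\zeta, \Gamma)^{\alpha}\, \varphi(\zeta), \qquad \alpha > 0,
\]
\[
  \Psi(t) = x + y_1 t^{k_1} + \cdots + y_L t^{k_L}.
\]
```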

    Studies of strategic performance management for classical organizations: theory & practice

    Nowadays, the activities of "Performance Management" have spread broadly into virtually every part of business and management, and numerous practitioners and researchers from very different disciplines are involved in exploring its different aspects. In this thesis, some relevant historical developments in performance management are first reviewed, including various theories and frameworks of performance management. Several management science techniques for assessing performance management are then developed, including new methods in Data Envelopment Analysis (DEA) and Soft Systems Methodology (SSM). A theoretical framework for performance management and its practical procedures (five phases) are developed for "classic" organizations using soft systems thinking, and the relationship with existing theories is explored. These results are then applied in three case studies to verify our theoretical development. One of the main contributions of this work is to point out, and to systematically explore, the basic idea that the effective forms and structures of performance management for an organization are likely to depend greatly on the organizational configuration, so that performance management coordinates well with other management activities in the organization. This idea has seemingly been neglected in the existing performance management literature, in the sense that there is little known research associating particular forms of performance management with explicit assumptions about organizational configuration. By applying SSM, this thesis logically derives the main functional blocks of performance management in "classic" organizations and clarifies the relationships between performance management and other management activities. Furthermore, it develops new tools and procedures which can hierarchically decompose organizational strategies and produce a practical model of specific implementation steps for "classic" organizations. Our approach integrates popular types of performance management models. Last but not least, this thesis presents findings from three major case studies of organizations that differ considerably in management style, ownership, and operating environment, to illustrate the flexibility of the developed theoretical framework.

    Anytime algorithms for ROBDD symmetry detection and approximation

    Reduced Ordered Binary Decision Diagrams (ROBDDs) provide a dense and memory-efficient representation of Boolean functions. When ROBDDs are applied in logic synthesis, the problem arises of detecting both classical and generalised symmetries. The state of the art in symmetry detection is Mishchenko's algorithm. Mishchenko showed how to detect symmetries in ROBDDs without the need for checking the equivalence of all co-factor pairs. This work resulted in a practical algorithm for detecting all classical symmetries in an ROBDD in O(|G|³) set operations, where |G| is the number of nodes in the ROBDD. Mishchenko and his colleagues subsequently extended the algorithm to find generalised symmetries. The extended algorithm retains the same asymptotic complexity for each type of generalised symmetry. Both the classical and generalised symmetry detection algorithms are monolithic in the sense that they only return a meaningful answer when they are left to run to completion. In this thesis we present efficient anytime algorithms for detecting both classical and generalised symmetries, which output pairs of symmetric variables until a prescribed time bound is exceeded. These anytime algorithms are complete in that, given sufficient time, they are guaranteed to find all symmetric pairs. Theoretically, these algorithms reside in O(n³ + n|G| + |G|³) and O(n³ + n²|G| + |G|³) respectively, where n is the number of variables, so that in practice the advantage of anytime generality is not gained at the expense of efficiency. In fact, the anytime approach requires only very modest data structure support and offers unique opportunities for optimisation, so the resulting algorithms are very efficient. The thesis continues by considering another class of anytime algorithms for ROBDDs, motivated by the dearth of work on approximating ROBDDs. The need for approximation arises because many ROBDD operations result in an ROBDD whose size is quadratic in the size of the inputs. Furthermore, if ROBDDs are used in abstract interpretation, the running time of the analysis is related not only to the complexity of the individual ROBDD operations but also to the number of operations applied. The number of operations is, in turn, constrained by the number of times a Boolean function can be weakened before stability is achieved. This thesis proposes a widening that can be used both to constrain the size of an ROBDD and to ensure that the number of times it is weakened is bounded by some given constant. The widening can be used to systematically approximate an ROBDD either from above (i.e. derive a weaker function) or from below (i.e. infer a stronger function). The thesis also considers how randomised techniques may be deployed to improve the speed of computing an approximation by avoiding potentially expensive ROBDD manipulation.
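
    The thesis's anytime algorithms build on Mishchenko's approach rather than on raw cofactor checks, but the anytime contract itself (emit symmetric pairs until a time budget expires; complete if allowed to finish) can be illustrated with the naive cofactor-pair test that the abstract contrasts against. The sketch below is a hypothetical illustration using the Python `dd` BDD package, not the thesis's algorithm.

```python
import time
from itertools import combinations
from dd.autoref import BDD

def anytime_classical_symmetries(bdd: BDD, f, variables, budget_s: float):
    """Yield variable pairs (x, y) in which f is classically symmetric, i.e.
    f|_{x=1,y=0} == f|_{x=0,y=1}, until the time budget is exhausted.
    Naive O(n^2) cofactor-pair check, for illustration only."""
    deadline = time.monotonic() + budget_s
    for x, y in combinations(variables, 2):
        if time.monotonic() > deadline:
            return  # anytime: stop early; pairs already yielded remain valid
        f10 = bdd.let({x: True,  y: False}, f)
        f01 = bdd.let({x: False, y: True},  f)
        if f10 == f01:
            yield (x, y)

if __name__ == "__main__":
    bdd = BDD()
    bdd.declare('a', 'b', 'c')
    f = bdd.add_expr('(a & b) | c')   # symmetric in (a, b) only
    for pair in anytime_classical_symmetries(bdd, f, ['a', 'b', 'c'], budget_s=1.0):
        print('symmetric pair:', pair)
```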