Model Diagnostics meets Forecast Evaluation: Goodness-of-Fit, Calibration, and Related Topics
Principled forecast evaluation and model diagnostics are vital in fitting probabilistic models and forecasting outcomes of interest. A common principle is that fitted or predicted distributions ought to be calibrated, ideally in the sense that the outcome is indistinguishable from a random draw from the posited distribution. Much of this thesis is centered on calibration properties of various types of forecasts.
In the first part of the thesis, a simple algorithm for exact multinomial goodness-of-fit tests is proposed. The algorithm computes exact p-values based on various test statistics, such as the log-likelihood ratio and Pearson's chi-square. A thorough analysis shows improvements over extant methods. However, the runtime of the algorithm grows exponentially in the number of categories, and hence its use is limited.
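The idea of an exact multinomial p-value can be made concrete with a brute-force sketch: enumerate every possible count vector, and sum the probabilities of all outcomes whose test statistic is at least as extreme as the observed one. This minimal Python illustration is not the thesis's faster algorithm (whose details are not given here), but it shows exactly why naive enumeration grows exponentially in the number of categories:

```python
from math import lgamma, log, exp

def log_multinomial_pmf(counts, probs):
    """Log-probability of a multinomial outcome."""
    n = sum(counts)
    res = lgamma(n + 1)
    for c, p in zip(counts, probs):
        res -= lgamma(c + 1)
        if c > 0:
            res += c * log(p)
    return res

def compositions(n, k):
    """All k-tuples of non-negative integers summing to n."""
    if k == 1:
        yield (n,)
        return
    for first in range(n + 1):
        for rest in compositions(n - first, k - 1):
            yield (first,) + rest

def llr_stat(counts, probs):
    """Log-likelihood ratio statistic G^2 against the null probabilities."""
    n = sum(counts)
    return 2 * sum(c * log(c / (n * p)) for c, p in zip(counts, probs) if c > 0)

def exact_pvalue(observed, probs):
    """Exact p-value: total probability of all outcomes at least as extreme."""
    t_obs = llr_stat(observed, probs)
    n, k = sum(observed), len(observed)
    p = 0.0
    for outcome in compositions(n, k):
        if llr_stat(outcome, probs) >= t_obs - 1e-12:
            p += exp(log_multinomial_pmf(outcome, probs))
    return p
```

For example, with a uniform null over three categories and all five observations in one category, only the three fully concentrated outcomes are as extreme, giving p = 3/243.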
In the second part, a framework rooted in probability theory is developed, which gives rise to hierarchies of calibration, and applies to both predictive distributions and stand-alone point forecasts. Based on a general notion of conditional T-calibration, the thesis introduces population versions of T-reliability diagrams and revisits a score decomposition into measures of miscalibration, discrimination, and uncertainty. Stable and efficient estimators of T-reliability diagrams and score components arise via nonparametric isotonic regression and the pool-adjacent-violators algorithm. For in-sample model diagnostics, a universal coefficient of determination is introduced that nests and reinterprets the classical R² of least squares regression.
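The estimation route via isotonic regression rests on the pool-adjacent-violators algorithm (PAVA), which computes the least-squares non-decreasing fit to a sequence, e.g. of outcomes ordered by forecast value. A generic sketch of PAVA (not the thesis's specific estimators) runs in linear time by merging adjacent blocks whenever the monotonicity constraint is violated:

```python
def pava(y, w=None):
    """Pool-adjacent-violators: least-squares non-decreasing fit to y.

    Maintains a stack of blocks (mean, weight, size); whenever a new value
    would break monotonicity, it is pooled with the preceding block(s) into
    their weighted mean.
    """
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    means, weights, sizes = [], [], []
    for i in range(n):
        m, wt, sz = y[i], w[i], 1
        while means and means[-1] > m:
            # pool with the previous block: weighted average of the two means
            m = (means[-1] * weights[-1] + m * wt) / (weights[-1] + wt)
            wt += weights[-1]
            sz += sizes[-1]
            means.pop(); weights.pop(); sizes.pop()
        means.append(m); weights.append(wt); sizes.append(sz)
    fit = []
    for m, sz in zip(means, sizes):
        fit.extend([m] * sz)
    return fit
```

On the sequence [1, 3, 2, 4] the violating pair (3, 2) is pooled to 2.5, yielding the monotone fit [1, 2.5, 2.5, 4]; a reliability diagram plots such a fit of the outcomes against the forecasts.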
In the third part, probabilistic top lists are proposed as a novel type of prediction in classification, which bridges the gap between single-class predictions and predictive distributions. The probabilistic top list functional is elicited by strictly consistent evaluation metrics, based on symmetric proper scoring rules, which admit comparison of various types of predictions.
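One plausible reading of this construction (an illustrative guess, not the thesis's verbatim definition): a probabilistic top list reports the k most probable classes together with their predicted probabilities, and can be scored by padding the omitted classes with probability zero and applying a symmetric proper scoring rule such as the Brier score:

```python
def top_list(probs, k):
    """Extract a probabilistic top list: the k most probable classes
    together with their predicted probabilities."""
    ranked = sorted(probs, key=probs.get, reverse=True)[:k]
    return {c: probs[c] for c in ranked}

def brier_top_list_score(tlist, outcome, classes):
    """Illustrative evaluation: pad omitted classes with probability zero
    and apply the (symmetric, proper) Brier score; lower is better.
    This padding construction is an assumption, not taken from the thesis."""
    return sum((tlist.get(c, 0.0) - (1.0 if c == outcome else 0.0)) ** 2
               for c in classes)
```

With forecast {a: 0.5, b: 0.3, c: 0.2}, the size-2 top list is {a: 0.5, b: 0.3}; if class a occurs, its padded Brier score is (0.5-1)² + 0.3² + 0² = 0.34.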
Simulation and Optimization of Scheduling Policies in Dynamic Stochastic Resource-Constrained Multi-Project Environments
The goal of project management is to organise project schedules so that projects are completed before the completion dates specified in their contracts. When a project runs beyond its completion date, organisations may lose the rewards from project completion as well as organisational prestige. Project management involves many uncertain factors, such as unknown arrival dates of new projects and unreliable task duration predictions, which may affect project schedules and lead to delivery overruns. Successful project management must account for these uncertainties. In this PhD study, we aim to create a more comprehensive model of a system in which projects (of multiple types) arrive at random to a resource-constrained environment, rewards for project delivery are reduced by fees for late completion, and tasks may complete sooner or later than their expected durations. In this thesis, we consider two extensions of the resource-constrained multi-project scheduling problem (RCMPSP) in dynamic environments. The RCMPSP requires scheduling the tasks of multiple projects simultaneously using a pool of limited renewable resources, and its goal is usually the shortest makespan or the highest profit. The first extension is the dynamic resource-constrained multi-project scheduling problem, where "dynamic" refers to new projects arriving randomly during ongoing project execution, disturbing the existing scheduling plan. The second extension is the dynamic and stochastic resource-constrained multi-project scheduling problem, where "dynamic and stochastic" refers to both random new project arrivals and stochastic task durations. In these problems, we assume that projects generate rewards at completion, completions later than a due date incur tardiness costs, and we seek to maximise either the average profit per unit time or the expected discounted long-run profit.
We model these problems as infinite-horizon discrete-time Markov decision processes.
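For a finite instance, the optimal policy of such a discounted MDP can in principle be computed by value iteration; the scheduling state spaces above are far too large for this, but the underlying recursion is simple. A generic sketch (all names and the toy transition format are hypothetical, not from the thesis):

```python
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-8):
    """Value iteration for a finite discounted MDP.

    P[s][a] is a list of (next_state, probability) pairs; R[s][a] is the
    expected immediate reward. Returns the optimal value function and a
    greedy policy with respect to it.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman optimality update: best one-step lookahead value
            best = max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                       for a in actions[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            break
    policy = {s: max(actions[s],
                     key=lambda a: R[s][a] + gamma * sum(p * V[t]
                                                         for t, p in P[s][a]))
              for s in states}
    return V, policy
```

On a two-state toy chain where state a pays reward 1 and loops to itself, the fixed point V(a) = 1/(1-gamma) emerges directly; realistic project-scheduling MDPs instead require simulation-based or approximate policies.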
Networks: A study in Analysis and Design
In this dissertation, we look at two fundamental aspects of networks: network analysis and network design. Part A addresses network analysis and involves finding the densest subgraph of a graph. The densest subgraph extraction problem is fundamentally a non-linear optimization problem. Nevertheless, it can be solved in polynomial time by an exact algorithm based on the iterative solution of a series of max-flow sub-problems. To approach graphs with millions of vertices and edges, however, one must resort to heuristic algorithms. We provide an efficient implementation of a greedy heuristic from the literature that is extremely fast and has some nice theoretical properties. An extensive computational analysis shows that the heuristic is very effective on many test instances, often providing the optimal or a near-optimal solution within short computing times. Part B addresses network design, a cornerstone of mathematical optimization concerned with defining the main characteristics of a network that satisfies requirements on connectivity, capacity, and level of service. In multi-commodity network design, one is required to design a network minimizing the installation cost of its arcs and the operational cost of serving a set of point-to-point connections. This prototypical problem was recently enriched by additional constraints imposing that each origin-destination connection is served by a single path satisfying one or more level-of-service requirements, defining the Network Design with Service Requirements problem. These constraints are crucial, e.g., in telecommunications and computer networks, to ensure reliable and low-latency communication. We provide a new formulation for the problem in which variables are associated with paths satisfying the end-to-end service requirements.
A fast algorithm for enumerating all the exponentially many feasible paths and, when this is not viable, a column generation scheme embedded into a branch-and-cut-and-price algorithm are provided.
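The greedy densest-subgraph heuristic from the literature referred to in Part A is commonly the peeling procedure: repeatedly delete a minimum-degree vertex and return the densest intermediate subgraph, which carries a 2-approximation guarantee for density |E|/|V|. A simple quadratic-time sketch follows (which greedy the dissertation implements is an assumption; an efficient implementation would use bucket queues for near-linear time):

```python
def greedy_densest_subgraph(adj):
    """Greedy peeling heuristic for the densest-subgraph problem.

    adj: dict mapping vertex -> set of neighbours (undirected graph).
    Repeatedly deletes a minimum-degree vertex and returns the intermediate
    vertex set with the highest edge density |E|/|V|, plus that density.
    """
    adj = {v: set(nb) for v, nb in adj.items()}  # work on a copy
    m = sum(len(nb) for nb in adj.values()) // 2
    n = len(adj)
    best_density = m / n if n else 0.0
    best_set = set(adj)
    while n > 1:
        v = min(adj, key=lambda u: len(adj[u]))  # minimum-degree vertex
        m -= len(adj[v])
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        n -= 1
        if m / n > best_density:
            best_density = m / n
            best_set = set(adj)
    return best_set, best_density
```

On a K4 with a pendant vertex attached, peeling removes the pendant first, and the surviving K4 (density 6/4 = 1.5) beats the full graph (7/5 = 1.4), so the heuristic returns the true densest subgraph here.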
Search for third generation vector-like leptons with the ATLAS detector
The Standard Model of particle physics provides a concise description of the building blocks of our universe in terms of fundamental particles and their interactions. It is an extremely successful theory, providing a plethora of predictions that precisely match experimental observation. In 2012, the Higgs boson was observed at CERN; it was the last particle predicted by the Standard Model that had yet to be discovered. While this added further credibility to the theory, the Standard Model appears incomplete. Notably, it only accounts for 5% of the energy density of the universe (the rest being "dark matter" and "dark energy"), it cannot reconcile gravity with quantum theory, it does not explain the origin of neutrino masses, and it cannot account for the matter/anti-matter asymmetry. The most plausible explanation is that the theory is an approximation and new physics remains to be discovered.
Vector-like leptons are well-motivated by a number of theories that seek to provide closure on the Standard Model. They are a simple addition to the Standard Model and can help to resolve a number of discrepancies without disturbing precisely measured observables. This thesis presents a search for vector-like leptons that preferentially couple to tau leptons. The search was performed using proton-proton collision data from the Large Hadron Collider collected by the ATLAS experiment from 2015 to 2018 at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 139 inverse femtobarns. Final states of various lepton multiplicities were considered to isolate the vector-like lepton signal against Standard Model and instrumental backgrounds. The major backgrounds mimicking the signal are from WZ, ZZ, and tt+Z production and from mis-identified leptons. A number of boosted decision trees were used to improve rejection power against background, and the signal was measured using a binned-likelihood estimator. No excess relative to the Standard Model was observed. Exclusion limits were placed on vector-like leptons in the mass range of 130 to 898 GeV.
Consolidation of Urban Freight Transport – Models and Algorithms
Urban freight transport is an indispensable component of economic and social life in cities. Compared to other types of transport, however, it contributes disproportionately to the negative impacts of traffic. As a result, urban freight transport is closely linked to social, environmental, and economic challenges. Managing urban freight transport and addressing these issues poses challenges not only for local city administrations but also for companies such as logistics service providers (LSPs). Numerous policy measures and company-driven initiatives exist in the area of urban freight transport to overcome these challenges. One central approach is the consolidation of urban freight transport. This dissertation focuses on urban consolidation centers (UCCs), a widely studied and applied measure in urban freight transport. The fundamental idea of UCCs is to consolidate freight transport across companies in logistics facilities close to an urban area in order to increase the efficiency of vehicles delivering goods within that area. Although the concept has been researched and tested for several decades and has been shown to reduce the negative externalities of freight transport in cities, in practice many UCCs struggle with a lack of business participation and financial difficulties. This dissertation is primarily focused on the costs and savings associated with the use of UCCs from the perspective of LSPs. The cost-effectiveness of UCC use, also referred to as cost attractiveness, can be seen as a crucial condition for LSPs to be interested in using UCC systems. The overall objective of this dissertation is two-fold. First, it aims to develop models that provide decision support for evaluating the cost-effectiveness of using UCCs. Second, it aims to analyze the impacts of urban freight transport regulations and operational characteristics on the cost attractiveness of using UCCs from the perspective of LSPs.
In this context, a distinction is made between UCCs that are jointly operated by a group of LSPs and UCCs that are operated by third parties who offer their urban transport service for a fee. The main body of this dissertation is based on three research papers. The first paper focuses on jointly-operated UCCs run by a group of cooperating LSPs. It presents a simulation model to analyze the financial impacts on LSPs participating in such a scheme. In doing so, a particular focus is placed on urban freight transport regulations. A case study is used to analyze the operation of a jointly-operated UCC under scenarios involving three freight transport regulations. The second and third papers take a different perspective on UCCs by focusing on third-party operated UCCs. In contrast to the first paper, they present an evaluation approach in which the decision to use UCCs is integrated with the vehicle route planning of LSPs. In addition to addressing the basic version of this integrated routing problem, known as the vehicle routing problem with transshipment facilities (VRPTF), the second paper presents problem extensions that incorporate time windows, fleet size and mix decisions, and refined objective functions. To heuristically solve the basic problem and the new problem variants, an adaptive large neighborhood search (ALNS) heuristic with an embedded local search heuristic and a set partitioning problem (SPP) component is presented. Furthermore, various factors influencing the cost attractiveness of UCCs, including time windows and usage fees, are analyzed using a real-world case study. The third paper extends the work of the second paper, incorporating daily and entrance-based city toll schemes and enabling multi-trip routing. A mixed-integer linear programming (MILP) formulation of the resulting problem is proposed, as well as an ALNS solution heuristic.
Moreover, a real-world case study with three European cities is used to analyze the impact of the two city toll systems in different operational contexts.
DEEP REINFORCEMENT LEARNING AND MODEL PREDICTIVE CONTROL APPROACHES FOR THE SCHEDULED OPERATION OF DOMESTIC REFRIGERATORS
Excess capacity of the UK’s national grid is widely quoted to be reducing to around 4% over the coming years as a consequence of increased economic growth (and hence power usage) and reductions in power generation plants. There is concern that short-term variations in power demand could lead to serious wide-scale disruption on a national scale. This is therefore spawning greater attention on augmenting traditional generation plants with renewable and localized energy storage technologies, and consideration of improved demand side response (DSR), where power consumers are incentivized to switch off assets when the grid is under pressure. It is estimated, for instance, that refrigeration/HVAC systems alone could account for ~14% of total UK energy usage, with refrigeration and water heating/cooling systems, in particular, being able to act as real-time ‘buffer’ technologies that can be demand-managed to accommodate transient demands by being switched off for short periods without damaging their outputs. Large populations of thermostatically controlled loads (TCLs) hold significant potential for performing ancillary services in power systems since they are well-established and widely distributed around the power network. In the domestic sector, refrigerators and freezers collectively constitute a very large electrical load since they are continuously connected and are present in almost all households. The rapid proliferation of the ‘Internet of Things’ (IoT) now affords the opportunity to monitor and visualise the performance of smart building appliances and, specifically, to schedule the operation of widely distributed domestic refrigerators and freezers to collectively improve energy efficiency and reduce peak power consumption on the electrical grid.
To accomplish this, this research proposes the real-time estimation of the thermal mass of individual refrigerators in a network using on-line parameter identification, and the co-ordinated (ON-OFF) scheduling of the refrigerator compressors to maintain their respective temperatures within specified hysteresis bands, commensurate with food safety standards. Custom Model Predictive Control (MPC) schemes and a machine learning algorithm (Reinforcement Learning) are researched to realize an appropriate scheduling methodology, which is implemented through COTS IoT hardware. Benefits afforded by the proposed schemes are investigated through experimental trials, which show that the co-ordinated operation of domestic refrigerators can 1) reduce the peak power consumption seen from the perspective of the electrical power grid (i.e. peak power shaving), 2) adaptively control the temperature hysteresis band of individual refrigerators to increase operational efficiency, and 3) contribute to a widely distributed aggregated load shed for demand side response purposes in order to aid grid stability. Comparative studies of measurements from experimental trials show that the co-ordinated scheduling of refrigerators allows energy savings of between 19% and 29% compared to their traditional isolated (non-co-operative) operation. Moreover, by adaptively changing the hysteresis bands of individual fridges in response to changes in thermal behaviour, a further 20% saving in energy is possible at the local refrigerator level, thereby providing benefits to both the network supplier and the individual consumer.
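The hysteresis-band ON-OFF control at the heart of such schemes can be illustrated with a first-order thermal model: the cabinet warms toward ambient temperature and cools at a fixed rate while the compressor runs, with the compressor switching ON above the upper band limit and OFF below the lower one. All parameter values below are illustrative stand-ins, not identified values from the experimental trials:

```python
def simulate_fridge(hours=12, dt=1/60, t_low=2.0, t_high=5.0,
                    t_ambient=20.0, k_loss=0.2, k_cool=6.0, power_w=120.0):
    """Toy ON-OFF hysteresis control of a refrigerator compartment.

    dt is the time step in hours; k_loss (1/h) scales heat gain toward
    ambient; k_cool (degC/h) is the compressor's cooling rate; power_w is
    the compressor's electrical draw. Returns final temperature (degC)
    and the energy consumed (Wh).
    """
    temp, on = (t_low + t_high) / 2, False
    energy_wh = 0.0
    for _ in range(int(hours / dt)):
        # hysteresis switching: ON above the band, OFF below it
        if temp >= t_high:
            on = True
        elif temp <= t_low:
            on = False
        # first-order thermal dynamics
        temp += k_loss * (t_ambient - temp) * dt - (k_cool * dt if on else 0.0)
        if on:
            energy_wh += power_w * dt
    return temp, energy_wh
```

Widening the band (raising t_high or lowering t_low) lengthens each cycle and reduces switching, which is the lever the adaptive hysteresis-band control exploits.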
Multi-parametric Analysis for Mixed Integer Linear Programming: An Application to Transmission Planning and Congestion Control
Enhancing existing transmission lines is a useful tool to combat transmission congestion and guarantee transmission security under increasing demand and growing renewable energy sources. This study concerns the selection of lines whose capacity should be expanded, and by how much, from the perspective of an independent system operator (ISO) aiming to minimize the system cost while respecting transmission line constraints and electricity generation and demand balance conditions, and incorporating ramp-up and startup ramp rates, shutdown ramp rates, ramp-down rate limits, and minimum up and minimum down times. For that purpose, we develop the ISO unit commitment and economic dispatch model and cast it as a multi-parametric analysis of a mixed integer linear programming (MILP) problem with right-hand side uncertainty. We first relax the binary variables to continuous variables and employ the Lagrange method and the Karush-Kuhn-Tucker conditions to obtain optimal solutions (optimal decision variables and objective function) and the critical regions associated with active and inactive constraints. Further, we extend the traditional branch and bound method to the large-scale MILP problem by determining the upper bound of the problem at each node, comparing the difference between the upper and lower bounds, and stopping at an approximate optimal solution within the decision makers' tolerated error range. In addition, the first derivative of the objective function with respect to the parameters of each line is used to inform the selection of lines to ease congestion and maximize social welfare. Finally, the amount of capacity upgrade is chosen by balancing the cost-reduction rate of the objective function with respect to the parameters against the cost of the line upgrade. Our findings are supported by numerical simulation and provide transmission line planners with decision-making guidance.
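The tolerance-based pruning idea can be illustrated on a toy MILP. The sketch below uses a 0-1 knapsack as a hypothetical stand-in for the unit-commitment problem: at each node a linear-relaxation upper bound (Dantzig's greedy fractional bound) is compared with the incumbent lower bound, and the node is pruned once the gap falls within the decision maker's tolerated error eps:

```python
def branch_and_bound_knapsack(values, weights, capacity, eps=0.0):
    """Branch-and-bound with an epsilon-tolerance pruning rule.

    Maximizes total value of items fitting in `capacity`. The LP-relaxation
    bound takes remaining items greedily by value/weight ratio, with the
    last item taken fractionally.
    """
    items = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)

    def upper_bound(k, cap):
        # Dantzig bound over the not-yet-decided items items[k:]
        ub = 0.0
        for i in items[k:]:
            if weights[i] <= cap:
                cap -= weights[i]
                ub += values[i]
            else:
                ub += values[i] * cap / weights[i]
                break
        return ub

    best = 0.0

    def dfs(k, cap, val):
        nonlocal best
        best = max(best, val)
        if k == len(items) or cap <= 0:
            return
        # prune once this node cannot improve on the incumbent by more than eps
        if val + upper_bound(k, cap) <= best + eps:
            return
        i = items[k]
        if weights[i] <= cap:
            dfs(k + 1, cap - weights[i], val + values[i])  # take item i
        dfs(k + 1, cap, val)                               # skip item i

    dfs(0, capacity, 0.0)
    return best
```

With eps = 0 the search returns the exact optimum; a positive eps trades optimality for fewer explored nodes, mirroring the tolerated-error stopping rule described above.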
Mixed Criticality Systems - A Review (13th Edition, February 2022)
This review covers research on the topic of mixed criticality systems that has been published since Vestal’s 2007 paper. It covers the period up to the end of 2021. The review is organised into the following topics: introduction and motivation, models, single processor analysis (including job-based, hard and soft tasks, fixed priority and EDF scheduling, shared resources, and static and synchronous scheduling), multiprocessor analysis, related topics, realistic models, formal treatments, systems issues, industrial practice, and research beyond mixed-criticality. A list of PhDs awarded for research relating to mixed-criticality systems is also included.