
    The effect of workload dependence in systems: Experimental evaluation, analytic models, and policy development

    This dissertation presents an analysis of the performance effects of burstiness (formalized by the autocorrelation function) in multi-tiered systems via a three-pronged approach: experimental measurements, analytic models, and policy development. The analysis considers (a) systems with finite buffers (e.g., systems with admission control that effectively operate as closed systems) and (b) systems with infinite buffers (i.e., systems that operate as open systems).

    For multi-tiered systems with a finite buffer size, experimental measurements show that if autocorrelation exists in any tier of a multi-tiered system, then autocorrelation propagates to all tiers of the system. The presence of autocorrelated flows in all tiers significantly degrades performance. Workload characterization in a real experimental environment driven by the TPC-W benchmark confirms the existence of autocorrelated flows, which originate from the autocorrelated service process of one of the tiers. A simple model is devised that captures the observed behavior. The model is in excellent agreement with experimental measurements and captures the propagation of autocorrelation in the multi-tiered system as well as the resulting performance trends.

    For systems with an infinite buffer size, this study focuses on analytic models, proposing and comparing two families of approximations for the departure process of a BMAP/MAP/1 queue that admits batch correlated flows and whose service time process may be autocorrelated. One approximation is based on the ETAQA methodology for the solution of M/G/1-type processes; the other arises from lumpability rules. Formal proofs show that both approximations preserve the marginal distribution of the inter-departure times and their initial correlation structures.

    This dissertation also demonstrates how knowledge of autocorrelation can be used to improve system performance: D_EQAL, a new load balancing policy for clusters with dependent arrivals, is proposed. D_EQAL separates jobs to servers according to their sizes, as traditional load balancing policies do, but this separation is biased by the effort to reduce the performance loss due to autocorrelation in the streams of jobs directed to each server. As a result, not all servers are equally utilized (i.e., the load in the system becomes unbalanced), but the performance benefits of this load unbalancing are significant.
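
    The propagation effect described above can be illustrated numerically. The following is a minimal sketch, not taken from the dissertation: it simulates a two-tier FIFO tandem queue in which tier 1 has an AR(1)-driven, hence autocorrelated, service process, and it estimates the lag-1 autocorrelation of the inter-departure times leaving each tier. The arrival rate, AR coefficient, and utilization are assumed values chosen only for illustration.

```python
# Illustrative sketch (not the dissertation's model): autocorrelated service
# at tier 1 showing up in the flows that feed tier 2 of a FIFO tandem queue.
import numpy as np

def lag_k_acf(x, k):
    """Sample autocorrelation of series x at lag k."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

rng = np.random.default_rng(1)
n = 200_000

# Poisson arrivals to tier 1 (assumed rate 1).
arrivals = np.cumsum(rng.exponential(1.0, n))

# Tier 1: autocorrelated service times (AR(1)-driven lognormal, mean ~0.7).
z = np.zeros(n)
for i in range(1, n):
    z[i] = 0.8 * z[i - 1] + rng.normal(0.0, 0.6)
s1 = 0.7 * np.exp(z - z.var() / 2)

# Tier 2: i.i.d. exponential service times, same mean.
s2 = rng.exponential(0.7, n)

# FIFO tandem queue via Lindley-style departure recursions:
# departures from tier 1 are the arrivals to tier 2.
d1 = np.zeros(n)
d2 = np.zeros(n)
d1[0] = arrivals[0] + s1[0]
d2[0] = d1[0] + s2[0]
for i in range(1, n):
    d1[i] = max(arrivals[i], d1[i - 1]) + s1[i]
    d2[i] = max(d1[i], d2[i - 1]) + s2[i]

print("ACF(1) of tier-1 service times:    %.3f" % lag_k_acf(s1, 1))
print("ACF(1) of tier-1 inter-departures: %.3f" % lag_k_acf(np.diff(d1), 1))
print("ACF(1) of tier-2 inter-departures: %.3f" % lag_k_acf(np.diff(d2), 1))
```

    With these assumed settings one would expect clearly positive lag-1 autocorrelation at both tiers, and values near zero if s1 is replaced by i.i.d. samples.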

    Empirical Studies in Hospital Emergency Departments

    This dissertation focuses on the operational impacts of crowding in hospital emergency departments. The body of this work comprises three essays. In the first essay, Waiting Patiently: An Empirical Study of Queue Abandonment in an Emergency Department, we study queue abandonment, or left without being seen. We show that abandonment is influenced not only by wait time, but also by the queue length and the observable queue flows during the wait. We show that patients are sensitive to being jumped in line and that patients respond differently to sicker and less sick people moving through the system. This study shows that managers have an opportunity to influence abandonment behavior by altering what information is available to waiting customers. In the second essay, Doctors Under Load: An Empirical Study of State-Dependent Service Times in Emergency Care, we show that when the emergency department is crowded, multiple mechanisms act to retard patient treatment, but care providers adjust their clinical behavior to accelerate service. We identify two mechanisms that providers use to accelerate the system: early task initiation and task reduction. In contrast to other recent work, we find the net effect of these countervailing forces to be an increase in service time when the system is crowded. Further, we use simulation to show that ignoring state-dependent service times leads to modeling errors that could cause hospitals to overinvest in human and physical resources. In the final essay, The Financial Consequences of Lost Demand and Reducing Boarding in Hospital Emergency Departments, we use discrete event simulation to estimate the number of patients lost to left without being seen and to ambulance diversion as a result of patients waiting in the emergency department for an inpatient bed (known as boarding). These lost patients represent both a failure of the emergency department to meet the needs of those seeking care and lost revenue for the hospital. We show that dynamic bed management policies that proactively cancel some non-emergency patients when the hospital is near capacity can lead to reduced boarding, more patients served, and increased hospital revenue.
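
    For readers unfamiliar with the simulation machinery used in the third essay, the following is a minimal sketch, not the authors' calibrated model: a discrete-event simulation (SimPy) of an emergency department queue in which waiting patients abandon after an exponentially distributed patience, the analogue of leaving without being seen. The arrival rate, treatment time, staffing level, and patience distribution are all assumed values.

```python
# Illustrative sketch (assumed parameters): ED queue with abandonment.
import random
import simpy

ARRIVAL_RATE = 6.0    # patients per hour (assumed)
SERVICE_MEAN = 0.5    # hours of treatment per patient (assumed)
N_PROVIDERS = 3
PATIENCE_MEAN = 1.0   # mean willingness to wait, in hours (assumed)

abandoned = 0
served = 0

def patient(env, providers):
    global abandoned, served
    patience = random.expovariate(1.0 / PATIENCE_MEAN)
    with providers.request() as req:
        # Wait for a provider, but give up once patience runs out.
        result = yield req | env.timeout(patience)
        if req in result:
            yield env.timeout(random.expovariate(1.0 / SERVICE_MEAN))
            served += 1
        else:
            abandoned += 1   # left without being seen

def arrivals(env, providers):
    while True:
        yield env.timeout(random.expovariate(ARRIVAL_RATE))
        env.process(patient(env, providers))

random.seed(7)
env = simpy.Environment()
providers = simpy.Resource(env, capacity=N_PROVIDERS)
env.process(arrivals(env, providers))
env.run(until=10_000)   # simulated hours

total = served + abandoned
print(f"abandonment rate: {abandoned / total:.1%} of {total} arrivals")
```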

    Deep Reinforcement Learning Models for Real-Time Traffic Signal Optimization with Big Traffic Data

    One of the most significant changes the globe has faced in recent years is the change brought about by the COVID-19 pandemic. While this research was started before the pandemic began, the pandemic has exposed the value that data and information can have in modern society. During the pandemic, traffic volumes changed substantially, exposing the inefficiencies of existing methods. This research focused on exploring two key ideas that will become increasingly relevant as societies adapt to these changes: Big Data and Artificial Intelligence. For many municipalities, traffic signals are still re-timed using traditional approaches, with significant reliance on static timing plans designed with data collected from static field studies. This research explored the possibility of using travel-time data obtained from Bluetooth and WiFi sniffing, an emerging Big Data approach that takes advantage of the ability to track and monitor unique devices as they move from location to location. An approach to re-time signals using an adaptive system was developed, analysed, and tested under varying conditions. The results of this work showed that this data could be used to reduce delays by as much as 10% compared to traditional approaches. More importantly, this approach demonstrated that it is possible to re-time signals using a readily available and dynamic data source without the need for field volume studies. In addition to Big Data technologies, Artificial Intelligence (AI) is playing an increasingly important role in modern technologies. AI is already being used to make complex decisions and categorise images, and it can best humans in complex strategy games. While AI shows promise, applications to Traffic Engineering have been limited. This research advances the state of the art by conducting a systematic sensitivity study of an AI technique, Deep Reinforcement Learning. This thesis investigated and identified optimal settings for key parameters such as the discount factor, learning rate, and reward functions. This thesis also developed and tested a complete framework that could potentially be applied to evaluate AI techniques in field settings, including applications of AI techniques such as transfer learning to reduce training times. Finally, this thesis examined framings for multi-intersection control, including comparisons to existing state-of-the-art approaches such as SCOOT.
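
    As a hint at where the parameters studied in the thesis enter a reinforcement learning controller, here is a minimal tabular Q-learning sketch for a toy single intersection. It is not the thesis implementation (which uses deep RL and a traffic simulator); the crude queue dynamics, arrival rates, and saturation flow below are assumptions, and the sketch only shows how the discount factor, learning rate, and reward function appear in the update rule.

```python
# Illustrative sketch (assumed dynamics): tabular Q-learning for a toy signal.
import numpy as np

GAMMA = 0.95   # discount factor
ALPHA = 0.10   # learning rate
EPS = 0.10     # exploration probability
SAT_FLOW = 4   # vehicles discharged per green interval (assumed)
MAX_Q = 20     # queue length at which the state is capped

rng = np.random.default_rng(0)
q_table = np.zeros((MAX_Q + 1, MAX_Q + 1, 2))   # state = (queue_NS, queue_EW)

def step(state, action):
    """Toy intersection: Poisson arrivals, discharge on the green approach."""
    q = list(state)
    q[0] += rng.poisson(1.5)            # assumed arrival rate, NS approach
    q[1] += rng.poisson(1.5)            # assumed arrival rate, EW approach
    q[action] = max(0, q[action] - SAT_FLOW)
    q = [min(x, MAX_Q) for x in q]
    reward = -(q[0] + q[1])             # reward function: negative total queue
    return tuple(q), reward

state = (0, 0)
for _ in range(200_000):
    # Epsilon-greedy action selection: 0 = NS green, 1 = EW green.
    if rng.random() < EPS:
        action = int(rng.integers(2))
    else:
        action = int(np.argmax(q_table[state]))
    next_state, reward = step(state, action)
    # Q-learning update with learning rate ALPHA and discount factor GAMMA.
    td_target = reward + GAMMA * q_table[next_state].max()
    q_table[state + (action,)] += ALPHA * (td_target - q_table[state + (action,)])
    state = next_state

print("learned choice at queues (10, 2):",
      "NS green" if q_table[(10, 2)].argmax() == 0 else "EW green")
```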

    Modeling and Managing Engineering Changes in a Complex Product Development Process

    Today's hyper-competitive worldwide market, turbulent environment, demanding customers, and diverse technological advancements force any corporation that develops new products to look into all possible areas of improvement in the entire product lifecycle management process. One of the areas that both scholars and practitioners have overlooked in the past is Engineering Change Management (ECM). The vision behind this dissertation is to bridge this gap by identifying the main characteristics of a New Product Development (NPD) process that are potentially associated with the occurrence and magnitude of iterations and Engineering Changes (ECs), developing means to quantify these characteristics and the interrelationships between them in a computer simulation model, testing the effects of different parameter settings and various coordination policies on project performance, and finally gaining operational insights that consider all relevant EC impacts. The causes of four major ECM problems (occurrence of ECs, long EC lead time, high EC cost, and high occurrence frequency of iterations and ECs) are first discussed diagrammatically and qualitatively. Factors that contribute to particular system behavior patterns, and the causal links between them, are identified through the exploratory construction of causal/causal-loop diagrams. To further understand the nature of NPD/ECM problems and verify the key assumptions made in the conceptual causal framework, three field survey studies were conducted in the summers of 2010 and 2011. Information and data were collected to assess current practice in the automobile and information technology industries, where EC problems are commonly encountered. Based upon the intuitive understanding gained from this preparatory work, a Discrete Event Simulation (DES) model is proposed. In addition to combining essential project features such as concurrent engineering, cross-functional integration, and resource constraints, it is distinct from existing research in its ability to differentiate and characterize various levels of uncertainty (activity uncertainty, solution uncertainty, and environmental uncertainty) that are dynamically associated with an NPD project and consequently result in the stochastic occurrence of NPD iterations and of ECs of two different types (emergent ECs and initiated ECs) as the project unfolds. Moreover, feedback-loop relationships among model variables are included in the DES model to enable more accurate prediction of the dynamic work flow. Using a numerical example, different project-related model features (e.g., learning curve effects, rework likelihood, and level of dependency of the product configuration) and coordination policies (e.g., overlapping strategy, rework review strategy, IEC batching policy, and resource allocation policy) are tested and analyzed in detail with respect to three major performance indicators: lead time, cost, and quality. Based on these results, decision-making suggestions regarding EC impacts are drawn from a systems perspective. Simulation results confirm that the nonlinear dynamics of the interactions between NPD and ECM play a vital role in determining the final performance of development efforts.
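
    A toy illustration of one mechanism such a DES model captures, stochastic rework shaped by a learning-curve effect, is sketched below. It is not the dissertation's model (which also covers concurrency, resource constraints, and two EC types); the task durations, rework probability, and learning factor are assumed numbers, and activities here run purely sequentially.

```python
# Illustrative sketch (assumed numbers): Monte Carlo estimate of how rework
# likelihood and a learning-curve effect inflate project lead time.
import random

TASK_DURATIONS = [10.0, 15.0, 20.0, 12.0]  # nominal activity durations (days)
REWORK_PROB = 0.30                          # chance an activity triggers rework / an EC
LEARNING_FACTOR = 0.7                       # each repeat takes 70% of the previous time

def project_lead_time(rng):
    total = 0.0
    for nominal in TASK_DURATIONS:
        duration = nominal
        while True:
            total += duration
            if rng.random() >= REWORK_PROB:
                break                        # activity passes review, no change raised
            duration *= LEARNING_FACTOR      # rework is faster thanks to learning
    return total

rng = random.Random(42)
samples = [project_lead_time(rng) for _ in range(100_000)]
baseline = sum(TASK_DURATIONS)
mean_lt = sum(samples) / len(samples)
print(f"nominal lead time: {baseline:.1f} days")
print(f"mean lead time with rework: {mean_lt:.1f} days "
      f"(+{100 * (mean_lt / baseline - 1):.0f}%)")
```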

    Online Simulation in Semiconductor Manufacturing

    In semiconductor manufacturing, discrete event simulation systems are well established for supporting planning decisions, and in recent years productivity has been improved by the use of simulation methods. The motivation for this thesis is to use online simulation not only for planning decisions, but also for a wide range of operational decisions. Therefore, an integrated online simulation system for short-term forecasting has been developed. The production environment is a mature high-mix logic wafer fab, selected because of its large potential for performance improvement. This thesis addresses several aspects of online simulation. The first aspect is the implementation of an online simulation system in semiconductor manufacturing. The general challenge is to achieve high speed, a high level of detail, and high forecast accuracy at the same time. To address this, an online simulation system has been created. The simulation model has a high level of detail and is created automatically from the underlying fab data. Creating such a simulation model from fab data raises additional problems related to the underlying data, chiefly data access, data integration, and data quality. These problems have been solved by using an integrated data model with several data extraction, data transformation, and data cleaning steps. The second aspect relates to the accuracy of online simulation. The overall problem is to increase the forecast horizon, increase the level of detail of the forecast, and reduce the forecast error. To provide useful forecasts, the simulation model combines a high level of modeling detail with proper initialization. The influences on forecast quality are analyzed, and the results show that the simulation forecast is accurate enough to predict future fab performance. The last aspect is finding ways to use simulation forecast results to improve fab performance. Numerous applications have been identified, each described in terms of the requirements on the forecast, the decision variables, and background information. An application example shows where a performance problem exists and how online simulation can resolve it. To further enhance the real-time capability of online simulation, new ways to connect the simulation model with the wafer fab are investigated. In fab-driven simulation, the simulation model and the real wafer fab run concurrently: the wafer fab emits events that update the simulation at runtime, so the model is always synchronized with the real fab. It becomes possible to start a simulation run in real time, with no further delay for data extraction, data transformation, and model creation. A prototype for a single work center has been implemented to demonstrate feasibility.
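
    The fab-driven synchronization idea can be sketched in a few lines. The event format, lot identifiers, and process time below are assumptions for illustration, not the thesis prototype: a single work center keeps its queue synchronized from streamed fab events and produces a short-term completion forecast by simulating forward deterministically from the current WIP.

```python
# Illustrative sketch (assumed event format): an event-synchronized work center.
from collections import deque

class WorkCenterModel:
    def __init__(self, mean_process_time):
        self.mean_process_time = mean_process_time  # assumed per-lot time (hours)
        self.queue = deque()                        # lot ids waiting or in process

    # --- synchronization: apply events streamed from the real fab ---
    def on_fab_event(self, event):
        kind, lot_id = event                        # assumed (kind, lot_id) tuples
        if kind == "TRACK_IN":                      # lot arrived at the work center
            self.queue.append(lot_id)
        elif kind == "TRACK_OUT":                   # lot finished processing
            if lot_id in self.queue:
                self.queue.remove(lot_id)

    # --- forecasting: simulate forward from the synchronized state ---
    def forecast_completions(self, now):
        """Deterministic forward simulation: predicted completion time per lot."""
        t, forecast = now, {}
        for lot_id in self.queue:
            t += self.mean_process_time
            forecast[lot_id] = t
        return forecast

wc = WorkCenterModel(mean_process_time=1.5)
for ev in [("TRACK_IN", "lot-01"), ("TRACK_IN", "lot-02"),
           ("TRACK_OUT", "lot-01"), ("TRACK_IN", "lot-03")]:
    wc.on_fab_event(ev)
print(wc.forecast_completions(now=100.0))   # {'lot-02': 101.5, 'lot-03': 103.0}
```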

    SLA Calculus

    For modeling Service-Oriented Architectures (SOAs) and validating worst-case performance guarantees, a deterministic modeling method with efficient analysis is presented. Upper and lower bounds for delay and workload in systems are used to describe performance contracts. The SLA Calculus allows one to combine model descriptions for single systems and to derive bounds for the reaction time and capacity of composed systems with analytic means. The intended, but not exclusive, modeling domain for SLA Calculus is distributed software systems with reaction-time constraints. SOAs are a system design paradigm that encapsulates software functions in service applications. Due to their standardized interfaces and accessibility via networks, large systems can be composed from smaller services and presented as services again. A well-known implementation of the service paradigm is Web Services, which allow applications to be built from components connected by the Internet. Users can transparently combine their own services with those rented from providers. Performance guarantees for SOAs gain importance as systems grow more complex and are applied in business environments. When a customer rents a service, the provider agrees to a Service Level Agreement (SLA) with conditions concerning interface, pricing, and performance. Service reaction time in the form of delay is an important part of many SLAs and is the subject of the performance models discussed in this work. With SLAs, providers guarantee a maximum delay for their products as long as the customer limits the workload submitted to their systems. Hence customers expect the contracted service provider to deliver the promised performance figures unless the workload exceeds the SLA. Since contract penalties may apply, providers have a natural interest in dimensioning their service with regard to the SLA: even for the maximum workload specified in the contract, the worst-case delay has to hold. Moreover, due to the compositional nature of Web Services, customers become providers themselves when they offer their service compositions to others. Again, worst-case performance bounds are of major interest here. Analyzing models of SOAs is one way to plan, dimension, and validate service performance. Many methods exist for system modeling and analysis; queueing systems and simulation are two well-known approaches in computer science. They provide average, and thus long-term, performance numbers quite easily using probabilistic workload and service process descriptions. Deriving system behavior in worst-case situations for performance guarantees is laborious and can be impossible for more complex systems. Obtaining delay bounds usable in SLAs for SOAs by model analysis is still a research issue. A promising candidate for modeling SOAs with SLAs is Network Calculus, an analytical method to derive performance bounds for network components. Given deterministic descriptions of the arrival to and the service in a network node, hard bounds for the network delay and the required buffer memory in routers are computed. A fine-granular separation between short- and long-term goals is possible. Network Calculus models also feature composition of elements and fast analytical analysis. When applied to SOAs with SLAs, the problem arises that SLAs are not suitable as a system description and information source for Network Calculus models. In particular, the internal service capacity is not exposed by SLAs, since providers consider it a business secret. Without service process descriptions, Network Calculus models cannot be analyzed.
    The SLA Calculus is presented as a solution to this problem. As a novel contribution to deterministic model analysis for SOAs, SLA Calculus is an extension of Network Calculus. Instead of service process descriptions, it uses information on latency to characterize a system. The delay of a service is no longer a scalar analysis result; it becomes a process over time that is bounded by Network Calculus-style curves, the delay curves. Together with arrival curves, the performance contracts in SLAs are formalized as so-called SLA Delay Properties (SDPs), a description of worst-case service performance. Service composition can be modeled by serial and parallel combination of SDPs. The necessary theorems for the resulting worst-case bounds are given and proved. We also present a method to transfer these performance figures back to the missing service process description. Apart from the basic theory, we consider solutions for practical modeling situations. An algorithm to extract arrival and delay curves from measurements enables the modeler to include already existing systems without given SLAs as model elements. Finally, we sketch a selection method, in the form of an optimization problem, to support dynamic service selection in SOAs with a Service Broker. SLA Calculus model analysis delivers deterministic upper and lower bounds for workload capacities and response times. For upper bounds the worst case is assumed, so the bounds are pessimistic. The advantage of SLA Calculus is the ability to compute these bounds very quickly, giving system modelers a fast overview of system characteristics in extreme situations, whereas other modeling methods would require a lengthy transient analysis. The strict worst-case perspective brought up another analysis target: until now, relatively little attention has been paid to contract conformance between subsequent services within service compositions. When services offer different workload capacities, the arrival rate to the system needs to be adjusted to avoid bottlenecks. Additionally, for service compositions no response-time contract can be guaranteed without internal buffering to enforce a common arrival rate. SLA Calculus unveils the necessary buffer delays and is able to bound them.
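
    For orientation, the closed-form bounds that classical Network Calculus yields for a token-bucket arrival curve alpha(t) = b + r*t and a rate-latency service curve beta(t) = R*max(0, t - T) are sketched below. These are the standard Network Calculus results the text refers to, not the SLA Calculus theorems themselves (which work with delay curves rather than service curves); the numbers in the example are arbitrary.

```python
# Background sketch: classical Network Calculus delay and backlog bounds for
# a token-bucket arrival curve and a rate-latency service curve.
def delay_bound(b, r, R, T):
    """Worst-case delay: horizontal deviation between alpha and beta = T + b/R."""
    assert r <= R, "system must be stable: arrival rate <= service rate"
    return T + b / R

def backlog_bound(b, r, R, T):
    """Worst-case backlog (buffer need): vertical deviation = b + r*T."""
    assert r <= R
    return b + r * T

# Example: 2 MB burst, 10 MB/s sustained rate, served at 50 MB/s after 5 ms latency.
print("delay bound:   %.1f ms" % (1000 * delay_bound(b=2.0, r=10.0, R=50.0, T=0.005)))
print("backlog bound: %.2f MB" % backlog_bound(b=2.0, r=10.0, R=50.0, T=0.005))
```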