
    Pedestrian Dynamics: Modeling and Analyzing Cognitive Processes and Traffic Flows to Evaluate Facility Service Level

    Walking is the oldest and most fundamental mode of transportation, and its prevalence has increased. An effective pedestrian model is crucial for evaluating pedestrian facility service levels and for enhancing pedestrian safety, performance, and satisfaction. The objectives of this study were to: (1) validate the efficacy of a queueing network model that predicts cognitive information processing time and task performance; (2) develop a generalized queueing-network-based cognitive information processing model that can be applied to construct a pedestrian cognitive structure and estimate reaction time from the first moment of the service time distribution; (3) investigate pedestrian behavior through naturalistic and experimental observations to analyze the effects of environmental settings and psychological factors on pedestrians; and (4) develop pedestrian level of service (LOS) metrics that are quick and practical for identifying improvement points in pedestrian facility design. Two empirical and two analytical studies were conducted to address these objectives. The first study investigated the efficacy of using a queueing network to model and predict cognitive information processing time. A motion capture system was used to collect detailed pedestrian movement data. The reaction time predicted by the queueing network was compared with the empirical results to validate the model; no significant difference in mean reaction time was found. The second study developed a generalized queueing network system so that a task can be modeled with an approximated queueing network and the first moment of any service time distribution; again, there was no significant difference between the empirical results and the proposed model with respect to mean reaction time. The third study investigated methods to quantify pedestrian traffic behavior and to analyze physical and cognitive behavior from real-world observation and a field experiment. Footage from indoor and outdoor corridors was used to quantify pedestrian behavior, and the effects of environmental settings and/or psychological factors on travel performance were tested. Finally, ad hoc and tailor-made LOS metrics were presented for simple, realistic service level assessments. The proposed methodologies comprised space-revision LOS, delay-based LOS, preferred-walking-speed-based LOS, and 'blocking probability'-based LOS.
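    The abstract does not spell out the formulas behind the proposed LOS metrics, so the sketch below is only an illustration of how a 'blocking probability'-style metric could be computed, not the study's actual method: a corridor bottleneck is treated as an M/M/c/c loss system, and the Erlang-B blocking probability is mapped to hypothetical LOS grades. The arrival rate, service rate, capacity, and grade thresholds are all assumed values.

```python
def erlang_b(offered_load: float, servers: int) -> float:
    """Blocking probability of an M/M/c/c loss system (Erlang-B),
    via the numerically stable recursion B(c) = A*B(c-1) / (c + A*B(c-1))."""
    b = 1.0
    for c in range(1, servers + 1):
        b = offered_load * b / (c + offered_load * b)
    return b


def blocking_los(arrival_rate: float, service_rate: float, capacity: int) -> str:
    """Map the blocking probability of a corridor bottleneck to a hypothetical
    LOS grade (the thresholds are illustrative, not taken from the study)."""
    p_block = erlang_b(arrival_rate / service_rate, capacity)
    for limit, grade in [(0.01, "A"), (0.05, "B"), (0.10, "C"), (0.20, "D"), (0.40, "E")]:
        if p_block <= limit:
            return grade
    return "F"


# Assumed example: 90 pedestrians/min arrive, a pedestrian occupies a slot in the
# bottleneck cross-section for about 2 s, and 5 pedestrians fit abreast.
print(blocking_los(arrival_rate=90 / 60, service_rate=0.5, capacity=5))
```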

    Data-Driven Robust Optimization in Healthcare Applications

    Healthcare operations have enjoyed reduced costs, improved patient safety, and innovation in healthcare policy over a huge variety of applications by tackling problems via the creation and optimization of descriptive mathematical models to guide decision-making. Despite these accomplishments, models are stylized representations of real-world applications, reliant on accurate estimations from historical data to justify their underlying assumptions. To protect against unreliable estimations, which can adversely affect the decisions generated from applications dependent on fully realized models, techniques that are robust against misspecifications are utilized while still making use of incoming data for learning. Hence, new robust techniques are applied that (1) allow the decision-maker to express a spectrum of pessimism against model uncertainties while (2) still utilizing incoming data for learning. Two main applications are investigated with respect to these goals: the first is a percentile optimization technique for a multi-class queueing system, applied to hospital Emergency Departments; the second studies the use of robust forecasting techniques in improving developing countries' vaccine supply chains via (1) an innovative outside-of-cold-chain policy and (2) a district-managed approach to inventory control. Both of these research application areas utilize data-driven approaches that feature learning and pessimism-controlled robustness.
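    The dissertation's formulations are not reproduced here, but the flavor of percentile (pessimism-controlled) optimization can be sketched: rather than minimizing the expected waiting time, the decision-maker minimizes a cost built on an upper percentile of the waiting-time distribution, with the percentile acting as the pessimism knob. The sketch below applies this to staffing a single-class M/M/c queue as a stand-in for an Emergency Department; the queue model, cost weights, and all parameters are illustrative assumptions, not the study's formulation.

```python
import math


def erlang_c(offered_load: float, servers: int) -> float:
    """Probability of waiting in an M/M/c queue (Erlang-C), via the Erlang-B recursion."""
    b = 1.0
    for k in range(1, servers + 1):
        b = offered_load * b / (k + offered_load * b)
    rho = offered_load / servers
    return b / (1.0 - rho + rho * b)


def wait_percentile(arrival_rate: float, service_rate: float, servers: int, p: float) -> float:
    """p-th percentile of the waiting time in an M/M/c queue.
    P(W > t) = C * exp(-(c*mu - lambda) * t), so the percentile is closed form."""
    c_prob = erlang_c(arrival_rate / service_rate, servers)
    tail = 1.0 - p
    if tail >= c_prob:                      # enough probability mass at W = 0
        return 0.0
    return -math.log(tail / c_prob) / (servers * service_rate - arrival_rate)


def pessimistic_staffing(arrival_rate, service_rate, candidates, p,
                         cost_per_server, cost_per_hour_wait):
    """Pick the staffing level minimizing server cost plus a penalty on the
    p-th waiting-time percentile (the percentile p is the 'pessimism' knob)."""
    best = None
    for c in candidates:
        if c * service_rate <= arrival_rate:    # unstable configuration, skip
            continue
        w_p = wait_percentile(arrival_rate, service_rate, c, p)
        total = cost_per_server * c + cost_per_hour_wait * w_p
        if best is None or total < best[1]:
            best = (c, total, w_p)
    return best


# Assumed numbers: 20 patients/hour, 4 patients/hour per provider,
# optimize against the 90th percentile of waiting time.
print(pessimistic_staffing(20, 4, range(6, 12), p=0.90,
                           cost_per_server=100, cost_per_hour_wait=500))
```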

    Waiting Patiently: An Empirical Study of Queue Abandonment in an Emergency Department

    We study queue abandonment from a hospital emergency department. We show that abandonment is influenced by the queue length and the observable queue flows during the waiting exposure, even after controlling for wait time. For example, observing an additional person in the queue or an additional arrival to the queue leads to an increase in abandonment probability equivalent to a 25-minute or 5-minute increase in wait time, respectively. We also show that patients are sensitive to being “jumped” in the line and that patients respond differently to people more sick and less sick moving through the system. This customer response to visual queue elements is not currently accounted for in most queuing models. Additionally, to the extent the visual queue information is misleading or does not lead to the desired behavior, managers have an opportunity to intervene by altering what information is available to waiting customers.
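    The abstract reports its effects as wait-time equivalents (an extra person in the queue ≈ 25 minutes of wait, an extra observed arrival ≈ 5 minutes). One common way to obtain such numbers is a logistic model of abandonment whose covariate coefficients are divided by the wait-time coefficient. The sketch below fits such a model with statsmodels on synthetic data whose generating coefficients are chosen to reproduce those two ratios; the variable names, data, and model form are illustrative assumptions, not the paper's specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Synthetic waiting episodes (illustrative only): wait in minutes, people
# visible in the queue, and arrivals observed during the wait.
wait_min = rng.exponential(60, n)
queue_len = rng.poisson(8, n)
arrivals_seen = rng.poisson(3, n)

# Assumed generating model; the slopes are picked so that the wait-time
# equivalents match the 25-minute and 5-minute figures in the abstract.
logit = -6.0 + 0.02 * wait_min + 0.5 * queue_len + 0.1 * arrivals_seen
abandon = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([wait_min, queue_len, arrivals_seen]))
fit = sm.Logit(abandon.astype(float), X).fit(disp=False)
b_wait, b_queue, b_arrival = fit.params[1:4]

# Express visual-queue effects as wait-time equivalents, the same way the
# paper summarizes its findings (numbers here come from the synthetic data).
print(f"one extra person in queue  ~ {b_queue / b_wait:5.1f} extra minutes of wait")
print(f"one extra observed arrival ~ {b_arrival / b_wait:5.1f} extra minutes of wait")
```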

    Scheduling for today’s computer systems: bridging theory and practice

    Scheduling is a fundamental technique for improving performance in computer systems. From web servers to routers to operating systems, how the bottleneck device is scheduled has an enormous impact on the performance of the system as a whole. Given the immense literature studying scheduling, it is easy to think that we already understand enough about scheduling. But modern computer system designs have highlighted a number of disconnects between traditional analytic results and the needs of system designers. In particular, the idealized policies, metrics, and models used by analytic researchers do not match the policies, metrics, and scenarios that appear in real systems. The goal of this thesis is to take a step towards modernizing the theory of scheduling in order to provide results that apply to today’s computer systems, and thus ease the burden on system designers. To accomplish this goal, we provide new results that help to bridge each of the disconnects mentioned above. We move beyond the study of idealized policies by introducing a new analytic framework where the focus is on scheduling heuristics and techniques rather than individual policies. By moving beyond the study of individual policies, our results apply to the complex hybrid policies that are often used in practice. For example, our results enable designers to understand how the policies that favor small job sizes are affected by the fact that real systems only have estimates of job sizes. In addition, we move beyond the study of mean response time and provide results characterizing the distribution of response time and the fairness of scheduling policies. These results allow us to understand how scheduling affects QoS guarantees and whether favoring small job sizes results in large job sizes being treated unfairly. Finally, we move beyond the simplified models traditionally used in scheduling research and provide results characterizing the effectiveness of scheduling in multiserver systems and when users are interactive. These results allow us to answer questions about how to design multiserver systems and how to choose a workload generator when evaluating new scheduling designs.
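    One of the questions raised above, how size-based policies behave when only estimates of job sizes are available, can be explored numerically with a small simulation. The sketch below compares FCFS with non-preemptive shortest-job-first in a single-server queue with Poisson arrivals and heavy-tailed job sizes, where SJF orders jobs by a multiplicatively noisy size estimate. The workload and noise model are assumptions for illustration, not the thesis's analytic framework.

```python
import heapq
import random


def simulate(policy: str, n_jobs: int = 100_000, load: float = 0.8,
             est_noise: float = 0.0, seed: int = 1) -> float:
    """Mean response time of an M/G/1 queue under FCFS or non-preemptive SJF,
    where SJF ranks jobs by a log-normally perturbed estimate of their size."""
    rng = random.Random(seed)
    sizes = [rng.lognormvariate(-0.5, 1.0) for _ in range(n_jobs)]  # mean size ~ 1
    arrivals, t = [], 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(load)          # arrival rate = load, since E[size] = 1
        arrivals.append(t)

    free_at, i, total_resp = 0.0, 0, 0.0
    ready = []                              # heap of (priority, arrival, size)
    while i < n_jobs or ready:
        # admit every job that has arrived by the time the server frees up
        # (or the next job if the server would otherwise sit idle)
        while i < n_jobs and (arrivals[i] <= free_at or not ready):
            est = sizes[i] * rng.lognormvariate(0.0, est_noise) if est_noise else sizes[i]
            prio = arrivals[i] if policy == "FCFS" else est
            heapq.heappush(ready, (prio, arrivals[i], sizes[i]))
            i += 1
        _, arr, size = heapq.heappop(ready)
        start = max(free_at, arr)
        free_at = start + size
        total_resp += free_at - arr
    return total_resp / n_jobs


# All parameters are illustrative; compare exact sizes against noisy estimates.
for label, noise in [("FCFS", 0.0), ("SJF exact", 0.0), ("SJF noisy", 1.0)]:
    policy = "FCFS" if label == "FCFS" else "SJF"
    print(f"{label:10s} mean response time = {simulate(policy, est_noise=noise):.2f}")
```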

    An Approach for Guiding Developers to Performance and Scalability Solutions

    This thesis proposes an approach that enables developers who are novices in software performance engineering to solve software performance and scalability problems without the assistance of a software performance expert. The contribution of this thesis is the explicit consideration of the implementation level to recommend solutions for software performance and scalability problems. This includes a set of description languages for data representation and human-computer interaction, and a workflow.

    Cost-Sensitive Concurrent Planning Under Duration Uncertainty for Service-Level Agreements

    This paper brings together work in stochastic modelling, using the process algebra PEPA, and work in automated planning. Stochastic modelling has been concerned with verification of system performance metrics for some time: given a model of a system, determining whether it will meet a service-level agreement (SLA), for example, whether a given sequence of transitions on a network will complete within 5 seconds 80% of the time. The problem of deciding how to reconfigure the system most cost-effectively when the SLA cannot be met has not been widely explored: it is currently solved manually. Inspired by this, we consider how planning can be used to automate the configuration of service-oriented systems. Configuring these stochastic systems presents new challenges to planning: building plans that meet SLAs, but also have low cost. To this end, we present a domain-independent planner for planning problems with action costs and stochastic durations, and show how it can be used both to solve traditional planning domains and to configure a larger process algebra model.
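    The SLA in the example above ('complete within 5 seconds 80% of the time') can be checked directly once transition rates are fixed: in a PEPA-style model each transition duration is exponentially distributed, so the completion time of a fixed sequence is a sum of independent exponentials. The sketch below estimates the SLA probability by Monte Carlo (the hypoexponential CDF would also give it in closed form); the rates and the particular sequence are assumptions, only the 5 s / 80% target comes from the example.

```python
import random


def sla_probability(rates, deadline, n_samples=200_000, seed=0):
    """Estimate P(sum of independent exponential transition durations <= deadline),
    i.e. the probability that a fixed sequence of transitions meets the SLA."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        total = sum(rng.expovariate(r) for r in rates)
        hits += total <= deadline
    return hits / n_samples


# Assumed sequence of four transitions with rates per second; SLA: <= 5 s, 80% of the time.
rates = [2.0, 1.0, 3.0, 1.5]
p = sla_probability(rates, deadline=5.0)
print(f"P(complete within 5 s) = {p:.3f}  ->  SLA {'met' if p >= 0.80 else 'violated'}")
```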

    Dependence-driven techniques in system design

    Burstiness in workloads is often found in multi-tier architectures, storage systems, and communication networks. This feature is extremely important in system design because it can significantly degrade system performance and availability. This dissertation focuses on how to use knowledge of burstiness to develop new techniques and tools for performance prediction, scheduling, and resource allocation under bursty workload conditions. For multi-tier enterprise systems, burstiness in the service times is catastrophic for performance. Via detailed experimentation, we identify the cause of performance degradation as the persistent bottleneck switch among various servers. This results in an unstable behavior that cannot be captured by existing capacity planning models. In this dissertation, beyond identifying the cause and effects of bottleneck switch in multi-tier systems, we also propose modifications to the classic TPC-W benchmark to emulate bursty arrivals in multi-tier systems. This dissertation also demonstrates how burstiness can be used to improve system performance. Two dependence-driven scheduling policies, SWAP and ALoC, are developed. These general scheduling policies counteract burstiness in workloads and maintain high availability by delaying selected requests that contribute to burstiness. Extensive experiments show that both SWAP and ALoC achieve good estimates of service times based on the knowledge of burstiness in the service process. As a result, SWAP successfully approximates shortest-job-first (SJF) scheduling without requiring a priori information about job service times, and ALoC adaptively controls system load by infinitely delaying only a small fraction of the incoming requests. The knowledge of burstiness can also be used to forecast the length of idle intervals in storage systems. In practice, background activities are scheduled during system idle times. The scheduling of background jobs is crucial in terms of the performance degradation of foreground jobs and the utilization of idle times. In this dissertation, new background scheduling schemes are designed to determine when and for how long idle times can be used for serving background jobs, without violating predefined performance targets of foreground jobs. Extensive trace-driven simulation results illustrate that the proposed schemes are effective and robust in a wide range of system conditions. Furthermore, if there is burstiness within idle times, then maintenance features like disk scrubbing and intra-disk data redundancy can be successfully scheduled as background activities during idle times.
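    The dissertation's workloads and benchmark modifications are not reproduced here, but bursty arrivals of the kind it studies are commonly emulated with a two-state Markov-modulated Poisson process that alternates between a quiet and a bursty rate. The sketch below generates such a trace and reports a standard burstiness measure, the variance-to-mean ratio of counts per window (close to 1 for Poisson traffic, much larger for bursty traffic); all rates and switching times are illustrative assumptions.

```python
import random


def mmpp2_arrivals(t_end, rate_quiet=1.0, rate_burst=20.0,
                   mean_stay_quiet=50.0, mean_stay_burst=5.0, seed=0):
    """Arrival times from a 2-state Markov-modulated Poisson process: the arrival
    rate switches between a quiet and a bursty value at exponential epochs."""
    rng = random.Random(seed)
    t, arrivals, bursty = 0.0, [], False
    while t < t_end:
        rate = rate_burst if bursty else rate_quiet
        stay = rng.expovariate(1.0 / (mean_stay_burst if bursty else mean_stay_quiet))
        end = min(t + stay, t_end)
        while True:                      # Poisson arrivals within this segment
            t += rng.expovariate(rate)
            if t >= end:
                break
            arrivals.append(t)
        t = end
        bursty = not bursty
    return arrivals


def index_of_dispersion(arrivals, window):
    """Variance-to-mean ratio of arrival counts per window."""
    if not arrivals:
        return float("nan")
    n_windows = int(arrivals[-1] // window) + 1
    counts = [0] * n_windows
    for a in arrivals:
        counts[int(a // window)] += 1
    mean = sum(counts) / n_windows
    var = sum((c - mean) ** 2 for c in counts) / n_windows
    return var / mean


trace = mmpp2_arrivals(t_end=10_000.0)
print(f"{len(trace)} arrivals, index of dispersion = {index_of_dispersion(trace, 10.0):.1f}")
```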

    Cycle Time Estimation in a Semiconductor Wafer Fab: A Concatenated Machine Learning Approach

    The ongoing digitalization of all areas of life and industry is increasing the demand for microchips. More and more sectors, including the automotive industry, are finding that their supply chains now depend on semiconductor manufacturers, which recently led to the semiconductor crisis. This situation increases the need for accurate predictions of semiconductor delivery times. However, since semiconductor production is extremely difficult, such estimates are not easy to produce. Common approaches are either too simplistic (e.g., mean or rolling-mean estimators) or require too much time for detailed scenario analyses (e.g., discrete-event simulations). This thesis therefore proposes a new methodology intended to be more accurate than mean or rolling-mean estimators yet faster than simulations. The methodology uses a concatenation of machine learning models that predict waiting times in a semiconductor fab based on a set of features. The thesis develops and analyzes this methodology. It includes a detailed analysis of the features required by each model, an analysis of the exact production process that each product must pass through (referred to as its "route"), and strategies for handling uncertainty when feature values are not known in the future. In addition, the proposed methodology is evaluated with real operational data from a wafer fab of Robert Bosch GmbH. The methodology is shown to be superior to mean and rolling-mean estimators, especially in situations where the cycle time of a lot deviates significantly from the mean. It is also shown that the execution time of the method is significantly shorter than that of a detailed simulation.
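    The thesis's exact feature set and model chain are not recoverable from the abstract, but the core idea of a concatenated approach can be sketched: one regressor per route step predicts that step's waiting time, and at prediction time the accumulated predicted waiting time of earlier steps is fed into the model for the next step, so that unknown future state is handled by propagating estimates instead of observed values. The sketch below uses scikit-learn gradient boosting on synthetic data; the features, route length, data generation, and baseline are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_lots, n_steps = 4000, 5

# Synthetic per-step features (illustrative): WIP in front of the tool group
# and lot priority. The "true" waiting time at a step also depends on the lot's
# accumulated delay, which is what makes chaining the models worthwhile.
wip = rng.uniform(0, 100, (n_lots, n_steps))
priority = rng.integers(0, 2, (n_lots, 1)).repeat(n_steps, axis=1)

wait = np.zeros((n_lots, n_steps))
cum = np.zeros(n_lots)
for s in range(n_steps):
    wait[:, s] = np.clip(0.5 * wip[:, s] + 10 * (1 - priority[:, s])
                         + 0.2 * cum + rng.normal(0, 5, n_lots), 0, None)
    cum += wait[:, s]
cum_wait = np.cumsum(wait, axis=1)

train, test = slice(0, 3000), slice(3000, None)
models, pred_cum = [], np.zeros(n_lots - 3000)

for s in range(n_steps):
    # Features for step s: its static features plus the cumulative waiting time
    # of the previous steps (true values for training, predictions for testing).
    prev_true = cum_wait[:, s - 1] if s > 0 else np.zeros(n_lots)
    X_train = np.column_stack([wip[train, s], priority[train, s], prev_true[train]])
    model = GradientBoostingRegressor(random_state=0).fit(X_train, wait[train, s])
    models.append(model)

    X_test = np.column_stack([wip[test, s], priority[test, s], pred_cum])
    pred_cum = pred_cum + model.predict(X_test)

mae = np.mean(np.abs(pred_cum - cum_wait[test, -1]))
baseline = np.mean(np.abs(cum_wait[train, -1].mean() - cum_wait[test, -1]))
print(f"concatenated model MAE: {mae:.1f}   mean-estimator MAE: {baseline:.1f}")
```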

    Queueing-Theoretic End-to-End Latency Modeling of Future Wireless Networks

    The fifth generation (5G) of mobile communication networks is envisioned to enable a variety of novel applications. These applications place diverse and challenging requirements on the network. Consequently, the mobile network has to be not only capable of meeting the demands of any one of these applications, but also flexible enough that it can be tailored to the different needs of various services. Among these new applications, there are use cases that require low latency as well as ultra-high reliability, e.g., to ensure unobstructed production in factory automation or road safety for (autonomous) transportation. In these domains, the requirements are crucial, since violating them may lead to financial or even human damage. Hence, an ultra-low probability of failure is necessary. Based on this, two major questions arise that motivate this thesis. First, how can ultra-low failure probabilities be evaluated, given that experiments or simulations would require a tremendous number of runs and thus turn out to be infeasible? Second, given a network that can be configured differently for different applications through the concept of network slicing, what performance can be expected for different parameter choices and what is their optimal choice, particularly in the presence of other applications? In this thesis, both questions are answered by appropriate mathematical modeling of the radio interface and the radio access network. The aim is to find the distribution of the (end-to-end) latency, allowing stochastic measures such as the mean, the variance, and also ultra-high percentiles at the distribution tail to be extracted. The percentile analysis eventually leads to the desired evaluation of worst-case scenarios at ultra-low probabilities. To this end, the mathematical tool of queueing theory is utilized to study video streaming performance together with one or multiple (low-latency) applications. One of the key contributions is the development of a numeric algorithm to obtain the latency of general queueing systems for homogeneous as well as prioritized heterogeneous traffic. This provides the foundation for analyzing and improving end-to-end latency for applications with known traffic distributions, in arbitrary network topologies, consisting of one or multiple network slices.
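    The numeric algorithm developed in the thesis is not reproduced here, but the kind of output it targets, ultra-high latency percentiles rather than just means, can be illustrated with the one queueing model whose latency distribution is available in closed form: in an M/M/1 queue the sojourn time is exponential with rate mu - lambda, so P(T > t) = exp(-(mu - lambda) * t) and any percentile follows directly. The rates and target probabilities below are assumed values.

```python
import math


def mm1_latency_percentile(arrival_rate: float, service_rate: float, quantile: float) -> float:
    """Latency (sojourn time) percentile of an M/M/1 queue.
    T ~ Exp(mu - lambda), hence P(T <= t) = 1 - exp(-(mu - lambda) * t)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable (lambda >= mu)")
    return -math.log(1.0 - quantile) / (service_rate - arrival_rate)


# Assumed slice carrying 800 packets/s with capacity for 1000 packets/s:
lam, mu = 800.0, 1000.0
for q in (0.5, 0.99, 0.99999):          # median, 99th, and an ultra-high percentile
    print(f"{q:>8.5f}-quantile latency: {1e3 * mm1_latency_percentile(lam, mu, q):7.2f} ms")

# The mean and the tail diverge quickly: the mean is 1/(mu - lambda) = 5 ms,
# while the 99.999th percentile is ln(1e5)/(mu - lambda), about 57.6 ms.
```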