1,466 research outputs found

    Fluid flow switching servers: control and observer design

    Get PDF

    Call Center Capacity Planning

    Get PDF

    Analysis of buffer allocations in time-dependent and stochastic flow lines

    Full text link
    This thesis reviews and classifies the literature on the Buffer Allocation Problem under steady-state conditions and on performance evaluation approaches for queueing systems with time-dependent parameters. Subsequently, new performance evaluation approaches are developed. Finally, a local search algorithm for the derivation of time-dependent buffer allocations is proposed. The algorithm is based on numerically observed monotonicity properties of the system performance in the time-dependent buffer allocations. Numerical examples illustrate that time-dependent buffer allocations represent an adequate way of minimizing the average WIP in the flow line while achieving a desired service level
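
The thesis abstract does not spell out the search procedure, so the following is only a rough Python sketch of the kind of local search it describes: buffer spaces are removed period by period, and a move is kept only while a stand-in performance model still meets the service-level target. The `evaluate` function, its crude WIP/service trade-off, and all parameter values are illustrative assumptions, not the thesis's time-dependent evaluation approaches.

```python
import random

def evaluate(allocation):
    """Placeholder performance model returning (avg_wip, service_level).

    Stand-in only: it simply assumes more buffer space raises both WIP and
    the service level, mimicking the monotonicity reported in the thesis.
    """
    total_buffer = sum(sum(period) for period in allocation)
    avg_wip = float(total_buffer)
    service_level = min(1.0, 0.7 + 0.01 * total_buffer)
    return avg_wip, service_level

def local_search(periods, stations, target_service, max_iter=500, seed=0):
    """Greedily remove buffer spaces while the service-level target is still met."""
    rng = random.Random(seed)
    alloc = [[10] * stations for _ in range(periods)]    # generous, feasible start
    best_wip, _ = evaluate(alloc)
    for _ in range(max_iter):
        t, s = rng.randrange(periods), rng.randrange(stations)
        if alloc[t][s] == 0:
            continue
        alloc[t][s] -= 1                                 # try freeing one buffer space
        wip, service = evaluate(alloc)
        if service >= target_service and wip <= best_wip:
            best_wip = wip                               # keep the improving move
        else:
            alloc[t][s] += 1                             # revert
    return alloc, best_wip

if __name__ == "__main__":
    allocation, wip = local_search(periods=4, stations=3, target_service=0.95)
    print(allocation, wip)
```

The monotonicity the thesis reports is what makes such a simple accept-if-better search plausible; the real evaluation step would be one of the time-dependent performance approaches developed in the thesis.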

    Prioritizing Patients for Emergency Evacuation From a Healthcare Facility

    Get PDF
    The success of a healthcare facility evacuation depends on communication and decision-making at all levels of the organization, from the coordinators at incident command to the clinical staff who actually carry out the evacuation. One key decision is the order in which each patient is chosen for evacuation. While the typical planning assumption is that all patients are to be evacuated, there may not always be adequate time or resources available to move all patients. In these cases, prioritizing or ordering patients for evacuation becomes an extremely difficult decision to make. These decisions should be based on the current state of the facility, but without knowledge of the current patient roster or available resources, these decisions may not be as beneficial as possible. Healthcare facilities usually consider evacuation a last-resort measure, and there are often system redundancies in place to protect against having to completely evacuate all patients from a facility. Perhaps this is why there is not a great deal of research dedicated to improving patient transfers. In addition, the question of patient prioritization is a highly ethical one. A literature review of 1) suggested patient prioritization strategies for evacuation planning and 2) the priorities applied in actual facility evacuations indicates a lack of consensus as to whether critical or non-critical care patients should be moved away from a facility first in the event of a complete emergency evacuation. In addition, these policies are 'all-or-nothing' policies, implying that once a patient group is given priority, this entire group will be completely evacuated before any patients from the other group are transferred. That is, if critical care patients are given priority, all critical care patients will be transferred away from the facility before any non-critical care patient. The goal of this research is to develop a decision framework for prioritizing patient evacuations, where unique classifications of patient health, rates of evacuation, and survivability all impact the choice. First, I provide several scenarios (both in terms of physical processing estimates and competing, ethically motivated objectives) and offer insights and observations into the creation of a prioritization policy via dynamic programming. Dynamic programming is a problem-solving technique that recursively optimizes a series of decisions. The results of the dynamic programming provide optimal prioritization policies, and these are tested with simulation analysis to observe system performance under many of the same scenarios. Because the dynamic programming decisions are based on the state of the system, simulation also allows the testing of time-based decisions. The results from the dynamic programming and simulation, as well as the structural properties of the simulation, are used to create assumptions about how evacuations could be improved. The question is not whether patient priorities should be assigned, but how they should be assigned. Associated with assigning value to patients are a variety of ethical dilemmas. In this research, I attempt to address patient prioritization from an ethical perspective by discussing the basic principles and the potential dilemmas associated with such decisions. The results indicate that an all-or-nothing, or 'greedy', policy as discussed in the literature may not always be optimal for patient evacuations.
In some cases, a switching policy may be optimal. Switching policies begin by evacuating patients from one classification and then switch to evacuating the second patient class. A switch can only be made once; after a switch is made, all remaining patients from the new group should be evacuated. When no patients of that group remain in the system, the remaining patients from the class that was initially given priority should be evacuated. In the case of critical and non-critical care patients, switching policies first give priority to non-critical care patients. When the costs of holding patients in the system are not included in the models, and the decisions are based solely on maximizing the number of saved lives, switching policies may perform as well as or better than the greedy policies suggested in the literature. In addition, when holding costs are not included, it is easier to predict whether the optimal policy is a greedy policy or a switching policy. Prioritization policies can change based on the utility achieved from evacuating individual patients from each class, as well as for other competing objective functions. This research examines a variety of scenarios, such as maximizing saved lives and minimizing costs, and provides insights on how the selection of an objective impacts the choice. Another insight of this research concerns how multiple evacuation teams should be allocated to patients. When more than one evacuation team is available to move a group of patients, the teams should be allocated to the same patient group rather than split between the patient groups.
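
As a hedged illustration of the dynamic programming described above, the Python sketch below solves a toy two-class evacuation model: evacuation and deterioration times are exponential, and the recursion maximizes the expected number of patients evacuated alive. The rates, the competing-exponentials dynamics, and the assumption that the patient in transit remains at risk are invented for the example and are not the dissertation's exact model.

```python
from functools import lru_cache

MU = {1: 4.0, 2: 6.0}      # hypothetical evacuation rates (class 1 = critical care)
DELTA = {1: 0.5, 2: 0.1}   # hypothetical deterioration rates while in the facility

def next_state(n1, n2, cls):
    return (n1 - 1, n2) if cls == 1 else (n1, n2 - 1)

@lru_cache(maxsize=None)
def value(n1, n2):
    """Max expected number of patients evacuated alive from state (n1, n2)."""
    if n1 == 0 and n2 == 0:
        return 0.0, None
    best, best_action = float("-inf"), None
    for i, n_i in ((1, n1), (2, n2)):
        if n_i == 0:
            continue                      # cannot evacuate from an empty class
        death_rate = n1 * DELTA[1] + n2 * DELTA[2]
        total = MU[i] + death_rate
        # the evacuation of a class-i patient completes before any death
        v = (MU[i] / total) * (1.0 + value(*next_state(n1, n2, i))[0])
        # or some patient deteriorates first (competing exponentials)
        for j, n_j in ((1, n1), (2, n2)):
            if n_j:
                v += (n_j * DELTA[j] / total) * value(*next_state(n1, n2, j))[0]
        if v > best:
            best, best_action = v, i
    return best, best_action

if __name__ == "__main__":
    expected_saved, first_move = value(8, 12)
    print(f"expected saved: {expected_saved:.2f}, evacuate class {first_move} first")
```

Sweeping the starting state (n1, n2) in such a toy model is one way to see greedy versus switching structure emerge, mirroring the policy classes discussed above.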

    A World-Class University-Industry Consortium for Wind Energy Research, Education, and Workforce Development: Final Technical Report

    Get PDF
    During the two-year project period, the consortium members developed control algorithms for enhancing the reliability of wind turbine components, developed advanced operation and planning tools for accommodating high penetration of variable wind energy, and developed extensive education and research programs for educating stakeholders on critical issues related to wind energy research and development. In summary, the Consortium procured one utility-grade wind unit and two small wind units. Specifically, the Consortium procured a 1.5 MW GE wind unit in September 2010 by working with the world-leading wind energy developer Invenergy, which is headquartered in Chicago. The Consortium also installed advanced instrumentation on the turbine and performed relevant turbine reliability studies. The site for the wind unit is Invenergy's Grand Ridge wind farm in Illinois. The Consortium, working with Viryd Technologies, installed an 8 kW Viryd wind unit (the Lab Unit) at an engineering lab at IIT in September 2010 and an 8 kW Viryd wind unit (the Field Unit) at the Stuart Field on IIT's main campus in July 2011, and performed relevant turbine reliability studies. The operation of the Field Unit is also monitored by the Phasor Measurement Unit (PMU) in the nearby Stuart Building. The Consortium commemorated the installations at the July 20, 2011 ribbon-cutting ceremony. The Consortium's research on turbine reliability included (1) Predictive Analytics to Improve Wind Turbine Reliability; (2) Improve Wind Turbine Power Output and Reduce Dynamic Stress Loading Through Advanced Wind Sensing Technology; (3) Use High Magnetic Density Turbine Generator as Non-rare Earth Power Dense Alternative; (4) Survivable Operation of Three Phase AC Drives in Wind Generator Systems; (5) Localization of Wind Turbine Noise Sources Using a Compact Microphone Array; (6) Wind Turbine Acoustics - Numerical Studies; and (7) Performance of Wind Turbines in Rainy Conditions. The Consortium's research on wind integration included (1) Analysis of 2030 Large-Scale Wind Energy Integration in the Eastern Interconnection; (2) Large-scale Analysis of 2018 Wind Energy Integration in the Eastern U.S. Interconnection; (3) Integration of Non-dispatchable Resources in Electricity Markets; and (4) Integration of Wind Unit with Microgrid. The Consortium's education and outreach activities on wind energy included (1) Wind Energy Training Facility Development; (2) Wind Energy Course Development; and (3) Wind Energy Outreach.

    Analysis of Green IT metrics and creation of a measurement framework for the Sonera Helsinki Data Center (HDC) project

    Get PDF
    The two objectives of this thesis were to investigate and evaluate the most suitable set of energy efficiency metrics for the Sonera Helsinki Data Center (HDC), and to analyze which energy-efficient technologies could be implemented, and in what order, to gain the most impact. Sustainable IT is a complex matter, and it has two components. The first and more complex is the energy efficiency and energy proportionality of the IT environment. The second is the use of renewable energy sources. Both of these need to be addressed. This thesis is a theoretical study, and it focuses on energy efficiency; the use of off-site renewables is outside its scope. The main aim of this thesis is to improve energy efficiency through an effective metric framework. In the final metric framework, metrics that target renewable energy usage in the data center are included, as they are important from a CO2 emission reduction perspective. The energy-efficient solutions selected in this thesis are examples from the most important data center technology categories and do not attempt to cover the whole array of solutions for improving energy efficiency in a data center. The ontological goal is to present the main energy efficiency metrics available in scientific discourse and to present examples of energy-efficient solutions in the most energy-consuming technology domains inside the data center. Even though some of the concepts are quite abstract, realism is taken into account in every analysis. The epistemology of this thesis is based on scientific articles that include empirical validation and scientific peer review; this forms the origin and nature of the knowledge used. The findings of this thesis are considered valid and reliable based on this epistemology and on the use of the actual planning documents of Sonera HDC. The reasoning in this thesis is done in abstracto, but many empirical results qualify the results also as 'in concreto'. The findings are significant for Sonera HDC, but they are also applicable to any general data center project or company seeking energy efficiency in their data centers.
This thesis has two main objectives. The first is to find the most suitable measurement framework for demonstrating the energy efficiency of the Sonera Helsinki Data Center (HDC). The second is to analyze which energy-efficient solutions should be implemented, and in what order, to achieve the greatest possible impact. Green IT is a complex matter and involves two components. The first, and the more significant and complex, is energy efficiency and the proportionality of energy consumption to the workload. The second component of Green IT is the use of renewable energy sources. Both components must be taken into account. This thesis is a theoretical study. Its ontological goal is to present the most central energy efficiency metrics available in scientific discourse and to give examples of energy-efficient solutions for the technology areas that consume the most energy in data centers. Although some of the presented solutions are rather abstract, realism has been taken into account when making the assessments. Epistemologically, this thesis is based on scientific articles in which empirical validation and peer review of the truth value of the knowledge have been carried out. The author seeks to avoid bringing personal values and subjective views into the analysis, and instead evaluates solutions against the main objective of both increasing energy efficiency and reducing CO2 emissions in the data center. The findings are considered valid and reliable because they rest on the epistemology of scientific articles and because the actual planning documents of the Sonera HDC project were used as the basis of the assessment. The conclusions and analyses are abstracted but are based on empirical results concerning practical implementation and choices. The findings are significant for the Sonera HDC project and also for other data centers that want to operate on a sustainable basis.
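
The abstract does not name the specific metrics selected for the framework. Purely as an illustration of the kind of data center energy efficiency metric discussed in this literature, the Python sketch below computes Power Usage Effectiveness (PUE) and a renewable-energy share; the figures are made up and are not Sonera HDC data.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (the ideal value is 1.0)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

def renewable_share(renewable_kwh: float, total_facility_kwh: float) -> float:
    """Fraction of facility energy covered by renewable sources."""
    return renewable_kwh / total_facility_kwh if total_facility_kwh else 0.0

if __name__ == "__main__":
    print(f"PUE: {pue(1_500_000, 1_000_000):.2f}")                        # 1.50
    print(f"Renewable share: {renewable_share(600_000, 1_500_000):.0%}")  # 40%
```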

    Three Essays on Resource Allocation: Load Balancing on Highly Variable Service Time Networks, Managing Default Risk via Subsidies and Supplier Diversification, and Optimal Hotel Room Assignment.

    Full text link
    The first essay considers a service center with two stations, to which customers arrive in accordance with independent Poisson processes. Service times at either station follow the same general distribution, are independent of each other and are independent of the arrival process. The system is charged station-dependent holding costs at each station per customer per unit time. At any point in time, a decision-maker may decide to move, at a cost, some number of jobs from one queue to the other. We study the problem with the purpose of providing insights into this decision-making scenario, in the important case where the service time distribution is highly variable or simply has a heavy tail. We propose that the savvy use of Markov decision processes can lead to easily implementable heuristics when features of the service time distribution can be captured by introducing multiple customer classes. The second essay studies the problem of a manufacturer who faces supplier disruptions. In order to understand the interactions between three strategies (subsidizing the supplier, supplier diversification, and the creation of back-up inventory), the problem is analyzed using a simple model with inventory storage costs and shortage penalties. The model allows us to derive conditions under which these strategies are appropriate, either in isolation or in combination. A sensitivity analysis shows that the optimal decisions may not change monotonically when the parameters change. The third essay studies a hotel room assignment problem. The assignment is generally performed by the front desk staff on the arrival day using a lexicographic approach, but this may create empty room-nights between bookings that are hard to fill. This problem shares some features with the job shop problem and with the classroom assignment problem, both of which have been studied in the literature, but the problem itself has not been widely studied. We suggest a heuristic method to solve it, which can be run in a short time with the nightly batch operations that hotels routinely perform. The algorithm considerably improves the results from the lexicographic approach. Ph.D., Industrial & Operations Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/75816/1/fuentesl_1.pd
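
The room assignment heuristic itself is not described in the abstract; the following is only a generic best-fit Python sketch of the underlying idea, assigning each booking to the feasible room where it leaves the smallest empty gap between stays. The data structures, the gap criterion, and the example bookings are assumptions, not the essay's actual algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    name: str
    bookings: list = field(default_factory=list)   # (arrive, depart) night indices

    def fits(self, arrive, depart):
        """A booking fits if it overlaps no existing stay in this room."""
        return all(depart <= a or arrive >= d for a, d in self.bookings)

    def gap_created(self, arrive, depart):
        """Smallest run of empty nights this booking would leave next to existing stays."""
        gaps = [arrive - d for a, d in self.bookings if d <= arrive]
        gaps += [a - depart for a, d in self.bookings if a >= depart]
        return min(gaps, default=0)

def assign(bookings, rooms):
    """Assign each (arrive, depart) booking to the feasible room with the smallest gap."""
    assignment = {}
    for arrive, depart in sorted(bookings):
        feasible = [r for r in rooms if r.fits(arrive, depart)]
        if not feasible:
            assignment[(arrive, depart)] = None    # would need an upgrade/overflow rule
            continue
        room = min(feasible, key=lambda r: r.gap_created(arrive, depart))
        room.bookings.append((arrive, depart))
        assignment[(arrive, depart)] = room.name
    return assignment

if __name__ == "__main__":
    rooms = [Room("101"), Room("102")]
    bookings = [(0, 3), (3, 5), (1, 4), (5, 7)]
    print(assign(bookings, rooms))   # back-to-back stays end up in room 101
```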

    Internet of Things From Hype to Reality

    Get PDF
    The Internet of Things (IoT) has gained significant attention and mindshare in academia and industry, especially over the past few years. The reasons behind this interest are the potential capabilities that IoT promises to offer. On the personal level, it paints a picture of a future world where all the things in our ambient environment are connected to the Internet and seamlessly communicate with each other to operate intelligently. The ultimate goal is to enable objects around us to efficiently sense our surroundings, inexpensively communicate, and ultimately create a better environment for us: one where everyday objects act based on what we need and like without explicit instructions.

    5G-PPP Technology Board: Delivery of 5G Services Indoors - the wireless wire challenge and solutions

    Get PDF
    The 5G Public Private Partnership (5G PPP) has focused its research and innovation activities mainly on outdoor use cases and on supporting the user and its applications while on the move. However, many use cases inherently apply to indoor environments, and their requirements are not always properly reflected by the requirements established for outdoor applications. The best example of indoor applications can be found in the Industry 4.0 vertical, in which most described use cases occur in a manufacturing hall. Other environments exhibit similar characteristics, such as commercial spaces in offices, shopping malls and commercial buildings. Further similar environments can be found in the media & entertainment sector, the culture sector with museums, and the transportation sector with metro tunnels. Finally, in the residential space there is a strong trend towards wireless connectivity of appliances and devices in the home. Some of these spaces exhibit very demanding requirements in terms of, among others, device density, high-accuracy localisation, reliability, latency, time sensitivity, coverage and service continuity. The delivery of 5G services to these spaces has to consider the specificities of the indoor environments, in which radio propagation characteristics are different and, in the case of deep indoor scenarios, external radio signals cannot penetrate building construction materials. Furthermore, these spaces are usually “polluted” by existing wireless technologies, causing a multitude of interference issues with 5G radio technologies. Nevertheless, there exist cases in which the co-existence of 5G new radio and other radio technologies may be sensible, such as for offloading local traffic. In any case, the deployment of networks indoors should consider and be planned along existing infrastructure, like powerlines and available shafts for other utilities. Finally, indoor environments expose administrative cross-domain issues, and in some cases the so-called non-public networks foreseen by 3GPP could be an attractive deployment model for the owner/tenant of a private space and for the mobile network operators serving the area. Technology-wise, a number of solutions exist for indoor RAN deployment, ranging from small cell architectures to optical wireless/visible light communication and THz communication utilising reconfigurable intelligent surfaces. For service delivery, the concept of multi-access edge computing is well tailored to host virtual network functions needed in the indoor environment, including but not limited to functions supporting localisation, security, load balancing, video optimisation and multi-source streaming. Measurements of key performance indicators in indoor environments indicate that, with proper planning and consideration of the environment characteristics, available solutions can deliver on the expectations. Measurements have been conducted on throughput and reliability in the mmWave and optical wireless communication cases, on electric and magnetic fields, on round-trip latency, and on high-accuracy positioning in a laboratory environment. Overall, the results so far are encouraging, but they also indicate that 5G and beyond networks must advance further in order to meet the demands of future emerging intelligent automation systems in the next 10 years.
Highly advanced industrial environments present challenges for 5G specifications, spanning congestion, interference, security and safety concerns, high power consumption, restricted propagation and poor location accuracy within the radio and core backbone communication networks for massive IoT use cases, especially inside buildings. 6G and beyond-5G deployments for industrial networks will be increasingly dense, heterogeneous and dynamic, posing stricter performance requirements on the network. The large volume of data generated by future connected devices will put a strain on networks. It is therefore fundamental to discriminate the value of information in order to maximize its utility for end users given limited network resources.

    Pooling and polling: creation of pooling in inventory and queueing models

    Get PDF
    The subject of the present monograph is the ‘Creation of Pooling in Inventory and Queueing Models’. This research consists of the study of sharing a scarce resource (such as inventory, server capacity, or production capacity) between multiple customer classes. This is called pooling, where the goal is to achieve cost or waiting time reductions. For the queueing and inventory models studied, both theoretical, scientific insights and strategies applicable in practice are generated. This monograph consists of two parts: pooling and polling. In both research streams, a scarce resource (inventory or server capacity, and production capacity, respectively) has to be shared between multiple users. In the first part of the thesis, pooling is applied to multi-location inventory models. It is studied how cost reduction can be achieved by the use of stock transfers between local warehouses, so-called lateral transshipments. In this way, stock is pooled between the warehouses. The setting is motivated by a spare parts inventory network, where critical components of technically advanced machines are kept on stock to reduce down time durations. We create insights into the question of when lateral transshipments lead to cost reductions by studying several models. Firstly, a system with two stock points is studied, for which we completely characterize the structure of the optimal policy, using dynamic programming. For this, we formulate the model as a Markov decision process. We also derive conditions under which simple, easy-to-implement policies are always optimal, such as a hold-back policy and a complete pooling policy. Furthermore, we identify the parameter settings under which cost savings can be achieved. Secondly, we characterize the optimal policy structure for a multi-location model where only one stock point issues lateral transshipments, a so-called quick response warehouse. Thirdly, we apply the insights generated to the general multi-location model with lateral transshipments. We propose the use of a hold-back policy, and construct a new approximation algorithm for deriving the performance characteristics. It is based on the use of interrupted Poisson processes. The algorithm is shown to be very accurate, and can be used for the optimization of the hold-back levels, the parameters of this class of policies. Also, we study related inventory models, where a single stock point serves multiple customer classes. Furthermore, the pooling of server capacity is studied. For a two-queue model where the head-of-line processor sharing discipline is applied, we derive the optimal control policy for dividing the server's attention, as well as for accepting customers. Also, a server farm with an infinite number of servers is studied, where servers can be turned off after a service completion in order to save costs. We characterize the optimal policy for this model. In the second part of the thesis, polling models are studied, which are queueing systems where multiple queues are served by a single server. An application is the production of multiple types of products on a single machine. In this way, the production capacity is pooled between the product types. For the classical polling model, we derive a closed-form approximation for the mean waiting time at each of the queues. The approximation is based on the interpolation of light and heavy traffic results. Also, we study a system with so-called smart customers, where the arrival rate at a queue depends on the position of the server.
Finally, we invent two new service disciplines (the gated/exhaustive and the ??-gated discipline) for polling models, designed to yield 'fairness and efficiency' in the mean waiting times. That is, they result in almost equal mean waiting times at each of the queues, without increasing the weighted sum of the mean waiting times too much.
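
As a small illustration of the hold-back policies mentioned above, the Python sketch below decides which warehouse, if any, may send a lateral transshipment: a warehouse ships a unit only if its on-hand stock would stay at or above its hold-back level. The tie-breaking rule and the numbers are assumptions, not the thesis's optimized parameters or its approximation algorithm.

```python
def transshipment_source(requesting, on_hand, hold_back):
    """Return the warehouse that may ship one unit to `requesting`, or None.

    on_hand   : dict warehouse -> units currently on the shelf
    hold_back : dict warehouse -> units reserved for the warehouse's own demand
    """
    candidates = [
        w for w in on_hand
        if w != requesting and on_hand[w] - 1 >= hold_back[w]
    ]
    # ship from the warehouse with the most spare stock above its hold-back level
    return max(candidates, key=lambda w: on_hand[w] - hold_back[w], default=None)

if __name__ == "__main__":
    on_hand = {"A": 4, "B": 0, "C": 3}
    hold_back = {"A": 2, "B": 0, "C": 3}
    print(transshipment_source("B", on_hand, hold_back))   # "A" has spare stock; "C" holds back
```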