Analysis of Multiple Flows using Different High Speed TCP protocols on a General Network
We develop analytical tools for performance analysis of multiple TCP flows
(which could be using TCP CUBIC, TCP Compound, TCP New Reno) passing through a
multi-hop network. We first compute the average window size for a single TCP
connection (using CUBIC or Compound TCP) under random losses. We then consider
two techniques to compute steady state throughput for different TCP flows in a
multi-hop network. In the first technique, we approximate the queues as M/G/1
queues. In the second technique, we use an optimization program whose solution
approximates the steady state throughput of the different flows. Our results
match well with ns2 simulations.
Comment: Submitted to Performance Evaluation
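The standard building block behind M/G/1 approximations of the kind the abstract describes is the Pollaczek–Khinchine formula for the mean waiting time. A minimal sketch (the parameter values are illustrative, not taken from the paper):

```python
# Mean waiting time in an M/G/1 queue via the Pollaczek-Khinchine formula.
# This is a generic sketch of the M/G/1 approximation idea, not the paper's
# exact derivation; all numbers below are illustrative.

def mg1_mean_wait(lam, mean_service, var_service):
    """Mean waiting time W_q = lam * E[S^2] / (2 * (1 - rho))."""
    rho = lam * mean_service
    if rho >= 1.0:
        raise ValueError("queue is unstable: utilization rho >= 1")
    second_moment = var_service + mean_service ** 2
    return lam * second_moment / (2.0 * (1.0 - rho))

# Example: arrival rate 0.5/s, exponential service with mean 1 s (variance 1).
w = mg1_mean_wait(0.5, 1.0, 1.0)
print(round(w, 3))  # M/M/1 special case: W_q = rho / (mu - lam) = 1.0
```

With exponential service the formula collapses to the familiar M/M/1 waiting time, which makes it easy to sanity-check the general case.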
State feedback control of switching servers with setups
In this paper we study the control of switching servers, which are found, for example, in the manufacturing industry. In general, these systems are discrete event systems. A server processes multiple job types; switching between job types takes time, and during that time no jobs can be processed, so capacity is lost. How should a server switch between the job types in an efficient way? In this paper we derive the process cycle that is optimal with respect to work in process levels for a server with two job types and finite buffer capacities. The analysis is performed using a hybrid fluid model approximation. Once the optimal process cycle has been defined, a state feedback controller is proposed that steers the trajectory of the system to this optimal cycle.
Workstations are often placed in series to form a flowline of servers. Our goal is to control flowlines of switching servers such that the work in process level is minimized. In a flowline, only the most downstream workstation influences the work in process level of the system, since upstream workstations simply move jobs from one server to the next. If the most downstream workstation can process in its optimal cycle and the other workstations can make this happen, then optimal work in process levels are achieved. This paper investigates under which conditions the upstream workstations can make the most downstream workstation work optimally. Conditions on the upstream workstations are derived, and the class of flowlines is characterized for which the optimal process cycle of an isolated downstream workstation can become the optimal process cycle for the flowline. For a flowline consisting of two workstations, a state feedback controller is proposed and convergence to the optimal process cycle is proved mathematically.
An extensive case study demonstrates how the controller performs, both for the hybrid fluid model and for a discrete event implementation with stochastic inter-arrival and process times.
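The hybrid fluid model described above can be sketched numerically. The following is an illustrative simulation of a single switching server with two job types under a simple exhaustive policy (serve a buffer until it empties, then incur a setup and switch); the rates and setup time are made-up values, and the paper derives the optimal cycle analytically rather than by simulation:

```python
# Fluid-model sketch of a two-job-type switching server under an exhaustive
# switching policy. Illustrative parameters only; not the paper's optimal
# cycle derivation.

def simulate_cycle(lmb1, lmb2, mu, setup, steps=200000, dt=0.01):
    x = [5.0, 5.0]          # initial buffer contents (fluid)
    serving, t_setup = 0, 0.0
    wip_integral = 0.0
    for _ in range(steps):
        if t_setup > 0.0:                    # switching: capacity is lost
            t_setup = max(0.0, t_setup - dt)
        else:
            x[serving] = max(0.0, x[serving] - mu * dt)
            if x[serving] == 0.0:            # buffer drained: switch over
                serving = 1 - serving
                t_setup = setup
        x[0] += lmb1 * dt                    # fluid arrivals
        x[1] += lmb2 * dt
        wip_integral += (x[0] + x[1]) * dt
    return wip_integral / (steps * dt)       # time-average work in process

avg_wip = simulate_cycle(lmb1=0.3, lmb2=0.3, mu=1.0, setup=2.0)
print(round(avg_wip, 2))
```

Because total utilization (0.6) is below one, the trajectory settles into a periodic cycle with setups, and the time-average work in process stays bounded; a state feedback controller would steer the trajectory onto the optimal such cycle.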
Controlling the order pool in make-to-order production systems
For 'Make-To-Order' (MTO, i.e. customer-order-driven) production systems, the time that orders must wait for available production capacity is crucial. Controlling this waiting time is essential for realizing both short and reliable delivery times. Remco Germs therefore analysed and designed rules for order acceptance and order release in order to control these waiting times. Order acceptance and order release are the two most important mechanisms for influencing the length of waiting times and thereby steering production. Logistic performance depends to a large extent on specific characteristics of MTO systems, such as routing variability, limited production capacity, setup times, strict delivery conditions, and uncertainty in the arrival pattern of orders.
To gain a better understanding of the trade-offs that MTO companies must make in this respect, the thesis focuses on modelling the key characteristics of MTO systems. The insights this yields are then used to develop order acceptance and order release rules that are easy to understand and therefore easy to implement in practical situations. Even these relatively simple decision rules can lead to significant improvements in the logistic performance of MTO companies.
The thesis of Remco Germs analyses and develops order acceptance and order release policies to control queues in make-to-order (MTO) production systems. Controlling the time orders spend waiting in queues is crucial for realizing short and reliable delivery times, two performance measures which are of strategic importance for many MTO companies. Order acceptance and order release are the two most important production control mechanisms to influence the length of these queues. Their performance depends on typical characteristics of MTO systems, such as random (batch) order arrival, routing variability, fixed capacities, setup times and (strict) due-dates.
To better understand the underlying mechanisms of good order acceptance and order release policies, the models in this thesis focus on the main characteristics of MTO systems. The insights obtained from these models are then used to develop order acceptance and order release policies that are easy to understand and thereby easy to implement in practice. The results show that these relatively simple policies may already lead to significant performance improvements for MTO companies.
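A common family of order release policies in this literature keeps orders in a pool and releases them only while the released-but-unfinished shop workload stays below a norm. The following is an illustrative sketch of such a workload-norm release rule (the greedy FIFO-with-skip variant and all numbers are assumptions, not the thesis's exact policy):

```python
# Workload-norm order release sketch: orders (measured in hours of work)
# wait in a pool and are released greedily, in pool order, as long as the
# released shop workload stays at or below a norm. Illustrative only.

def release_orders(pool, shop_workload, norm):
    """Release pool orders while the shop workload stays within the norm."""
    released = []
    for job in list(pool):                  # iterate over a copy
        if shop_workload + job <= norm:
            shop_workload += job
            released.append(job)
            pool.remove(job)                # job leaves the order pool
    return released, shop_workload

pool = [4.0, 3.0, 5.0, 2.0]
released, load = release_orders(pool, shop_workload=6.0, norm=12.0)
print(released, pool)   # [4.0, 2.0] released; [3.0, 5.0] remain in the pool
```

Raising the norm shortens pool waiting times but lengthens shop queues; tuning that trade-off is exactly what such release rules are for.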
Routing and Staffing when Servers are Strategic
Traditionally, research focusing on the design of routing and staffing
policies for service systems has modeled servers as having fixed (possibly
heterogeneous) service rates. However, service systems are generally staffed by
people. Furthermore, people respond to workload incentives; that is, how hard a
person works can depend both on how much work there is, and how the work is
divided between the people responsible for it. In a service system, the routing
and staffing policies control such workload incentives, and so they also shape
the rate at which servers work. This
observation has consequences when modeling service system performance, and our
objective is to investigate those consequences.
We do this in the context of the M/M/N queue, which is the canonical model
for large service systems. First, we present a model for "strategic" servers
that choose their service rate in order to maximize a trade-off between an
"effort cost", which captures the idea that servers exert more effort when
working at a faster rate, and a "value of idleness", which assumes that servers
value having idle time. Next, we characterize the symmetric Nash equilibrium
service rate under any routing policy that routes based on the server idle
time. We find that the system must operate in a quality-driven regime, in which
servers have idle time, in order for an equilibrium to exist, which implies
that the staffing must have a first-order term that strictly exceeds that of
the common square-root staffing policy. Then, within the class of policies that
admit an equilibrium, we (asymptotically) solve the problem of minimizing the
total cost, when there are linear staffing costs and linear waiting costs.
Finally, we end by exploring the question of whether routing policies that are
based on the service rate, instead of the server idle time, can improve system
performance.
Comment: First submitted for journal publication in 2014; accepted for publication in Operations Research in 2016. Presented in select conferences throughout 201
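The trade-off at the heart of the strategic-server model can be illustrated numerically. In a stable M/M/N system each server's long-run busy fraction is lam/(N*mu), so its idle fraction is 1 minus that; the utility form "value of idleness minus effort cost" below is a stylized stand-in, not the paper's exact specification, and all parameter values are assumptions:

```python
# Sketch of the strategic-server trade-off: a server choosing a faster
# service rate mu gains idle time but pays a higher effort cost. The
# utility c_idle * idle - c_effort * mu**p is illustrative only.

def idle_fraction(lam, mu, n):
    """Per-server steady-state idle fraction 1 - lam/(n*mu) in M/M/N."""
    rho = lam / (n * mu)
    if rho >= 1.0:
        raise ValueError("unstable: lam >= n * mu")
    return 1.0 - rho

def utility(lam, mu, n, c_idle=1.0, c_effort=0.5, p=2.0):
    return c_idle * idle_fraction(lam, mu, n) - c_effort * mu ** p

# A server comparing two candidate rates (all others held symmetric):
print(round(utility(8.0, 1.0, 10), 3))   # mu = 1.0
print(round(utility(8.0, 1.2, 10), 3))   # mu = 1.2: more idleness, more effort
```

Note that an equilibrium requires idle time at all, which is why the abstract concludes the system must be staffed into a quality-driven regime.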
Destination-based Routing and Circuit Allocation for Future Traffic Growth
Internet traffic continues to grow relentlessly, driven largely by increasingly high-resolution video streaming, the increasing adoption of cloud computing, the emergence of 5G networks, and the ever-growing reach of social media and social networks. Existing networks use packet switching to route packets on a hop-by-hop basis from the source to the destination. However, they suffer from two shortcomings. First, in existing networks, packets are routed along a fixed shortest path using the Open Shortest Path First (OSPF) protocol or obliviously load-balanced across equal-cost paths using the Equal-Cost Multi-Path (ECMP) protocol. These routing protocols do not fully utilize the network capacity because they do not adapt to network congestion in their routing decisions. Second, although studies have shown that the majority of packets processed by Internet routers are pass-through traffic, packets nonetheless have to be queued and routed at every hop in existing networks, which unnecessarily adds substantial delays and processing costs.
In this thesis, we present two new approaches to overcome these shortcomings. First, we propose new backpressure-based routing algorithms that use only shortest-path routes when they suffice to accommodate the given traffic load, but incrementally expand the routing choices as needed to accommodate increasing traffic loads. This avoids the poor delay performance inherent in backpressure-based routing algorithms, where packets may take long detours under light or moderate loads, while still retaining their notable advantage, network-wide optimal throughput, because packets are adaptively routed along less congested paths. Second, we propose a unified packet and circuit switched network in which the underlying optical transport is used to circuit-switch pass-through traffic by means of pre-established circuits. This avoids unnecessary packet queuing delays and processing costs at each hop.
We propose a novel convex optimization framework, based on a new destination-based multicommodity flow formulation, for the allocation of circuits in such unified networks.
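The classic backpressure decision that these algorithms restrict toward shortest paths is easy to sketch: on each link, serve the commodity with the largest queue-backlog differential, and stay idle if no differential is positive. The queue values below are illustrative, and the thesis's variant additionally limits the candidate next hops under light load:

```python
# Per-link backpressure routing decision (classic form, illustrative).
# q_here / q_next map each commodity (destination) to its queue backlog at
# this node and at the candidate next hop.

def backpressure_choice(q_here, q_next):
    """Return (commodity, weight) with the largest positive backlog
    differential on this link, or None if the link should stay idle."""
    best = None
    for c in q_here:
        diff = q_here[c] - q_next.get(c, 0.0)
        if diff > 0 and (best is None or diff > best[1]):
            best = (c, diff)
    return best

q_here = {"d1": 7.0, "d2": 3.0}
q_next = {"d1": 2.0, "d2": 4.0}
print(backpressure_choice(q_here, q_next))  # ('d1', 5.0): largest backlog gap
```

Serving the maximum-differential commodity is what yields network-wide throughput optimality; the delay problem the thesis targets arises because under light load these differentials can steer packets onto long detours.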