16,445 research outputs found
Queuing Theoretic Analysis of Power-performance Tradeoff in Power-efficient Computing
In this paper we study the power-performance relationship of power-efficient
computing from a queuing theoretic perspective. We investigate the interplay of
several system operations including processing speed, system on/off decisions,
and server farm size. We identify that there are oftentimes "sweet spots" in
power-efficient operations: there exist optimal combinations of processing
speed and system settings that maximize power efficiency. For the single server
case, a widely deployed threshold mechanism is studied. We show that there
exist optimal processing speed and threshold value pairs that minimize the
power consumption. This holds for the threshold mechanism with job batching.
For the multi-server case, it is shown that there exist best processing speed
and server farm size combinations. Comment: Paper published in CISS 201
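The single-server "sweet spot" can be illustrated with a toy sweep. The cost model below is an illustrative assumption, not the paper's: an M/M/1 queue under an N-policy (the server sleeps until a threshold of n jobs accumulates), service rate equal to speed s, dynamic power s**alpha while busy, and a fixed energy cost per on/off cycle. The names `npolicy_metrics`, `sweet_spot`, and all constants are hypothetical.

```python
def npolicy_metrics(lam, s, n, alpha=3.0, e_switch=5.0):
    """Mean power draw and mean response time for an M/M/1 queue run
    under an N-policy: the server sleeps until n jobs accumulate, then
    serves at speed s until the queue empties. Assumed cost model (not
    the paper's): dynamic power s**alpha while busy, fixed energy
    e_switch spent per on/off cycle."""
    mu = s                            # service rate scales with speed
    rho = lam / mu
    if rho >= 1.0:
        return None                   # unstable operating point
    cycle = n / (lam * (1.0 - rho))   # mean length of one on/off cycle
    power = rho * s**alpha + e_switch / cycle   # busy power + switching power
    delay = 1.0 / (mu - lam) + (n - 1) / (2.0 * lam)  # N-policy response time
    return power, delay

def sweet_spot(lam=1.0, weight=1.0):
    """Grid-sweep (speed, threshold) pairs and return the pair
    minimizing power + weight * mean response time."""
    best = None
    for k in range(30):
        s = round(1.1 + 0.1 * k, 1)
        for n in range(1, 21):
            m = npolicy_metrics(lam, s, n)
            if m is None:
                continue
            cost = m[0] + weight * m[1]
            if best is None or cost < best[0]:
                best = (cost, s, n)
    return best
```

In this toy model the sweep lands on an interior optimum rather than a boundary point, mirroring the paper's finding that optimal speed/threshold pairs exist.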
Isoperimetric Partitioning: A New Algorithm for Graph Partitioning
Adaptive Neural Models of Queuing and Timing in Fluent Action
Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables, such as time-to-contact. At a finer scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for precise acts. Each context of action may require a different timed muscle activation pattern than similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive parallel patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (in frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine design to serve the lowest and the highest of the three levels of temporal structure treated. 
If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing. National Institute of Mental Health (R01 DC02852)
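The competitive-queuing idea in the abstract, where a parallel activation gradient implicitly encodes serial order and a cyclic competition reads it out, can be sketched in a few lines. This is a generic illustration of the principle, not the specific neural model the abstract describes.

```python
def competitive_queuing(gradient):
    """Recover serial order from a parallel activation gradient:
    on each cycle the most active element wins the competition, is
    emitted, and is then suppressed so the next-strongest element
    wins the following cycle. Illustrative sketch of the
    competitive-queuing principle."""
    acts = list(gradient)          # parallel analog representation
    order = []
    for _ in range(len(acts)):
        winner = max(range(len(acts)), key=lambda i: acts[i])
        order.append(winner)
        acts[winner] = float("-inf")   # suppression after performance
    return order

# A gradient where earlier-planned items are more active:
print(competitive_queuing([0.9, 0.7, 0.4, 0.2]))  # → [0, 1, 2, 3]
```

The relative priorities, not the absolute activation values, determine the performed order, which is why the representation is described as implicit.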
Cost minimization for unstable concurrent products in multi-stage production line using queueing analysis
This research and its resulting contribution are results of Assumption University of Thailand. The university provided partial financial support for the publication. Purpose: The paper applies queueing theory to evaluate a multi-stage production line process with concurrent goods. The intention of this article is to evaluate the efficiency of product assembly in the production line. Design/Methodology/Approach: To raise the efficiency of the assembly line, the performance of individual stations must be controlled. The arrival stream of concurrent products piles up before flowing into each station. All experiments are based on queueing network analysis. Findings: The performance analysis for unstable concurrent sub-items in the production line is discussed. The proposed analysis improves the total sub-production time by reducing the queue time in each station. Practical implications: The collected data are the number of workers, incoming and outgoing sub-products, throughput rate, and individual station processing time. At the front loading station, an operator unpacks product items into concurrent sub-items, which are automatically sorted by RFID tag or bar code identifiers. Simulation experiments are compared and validated against results from a real-world approximation. Originality/Value: It is an alternative improvement that increases the efficiency of the operation in each station with minimum costs. Peer-reviewed
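The per-station queue-time reduction described above can be illustrated with the standard M/M/1 waiting-time formula Wq = lam / (mu * (mu - lam)). This is a simplified single-class station model for illustration; the paper's analysis uses a queueing network with measured arrival, throughput, and processing rates.

```python
def station_queue_time(lam, mu):
    """Mean waiting time in queue at one M/M/1 station with arrival
    rate lam and service rate mu: Wq = lam / (mu * (mu - lam)).
    Illustrative station model, not the paper's exact network."""
    if lam >= mu:
        raise ValueError("unstable station: lam must be < mu")
    return lam / (mu * (mu - lam))

def total_line_time(lam, service_rates):
    """Total mean time through a serial line: queue wait plus
    service time at each station in sequence."""
    return sum(station_queue_time(lam, mu) + 1.0 / mu
               for mu in service_rates)

# Speeding up the slowest (bottleneck) station shrinks total time:
before = total_line_time(4.0, [5.0, 6.0, 7.0])
after = total_line_time(4.0, [6.0, 6.0, 7.0])
assert after < before
```

The comparison shows why the paper targets the queue time of individual stations: the bottleneck station dominates the total sub-production time.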
Two-Hop Routing with Traffic-Differentiation for QoS Guarantee in Wireless Sensor Networks
This paper proposes a Traffic-Differentiated Two-Hop Routing protocol for
Quality of Service (QoS) in Wireless Sensor Networks (WSNs). It targets WSN
applications having different types of data traffic with several priorities.
The protocol increases Packet Reception Ratio (PRR) and reduces
end-to-end delay while considering multi-queue priority policy, two-hop
neighborhood information, link reliability and power efficiency. The protocol
is modular and utilizes effective methods for estimating the link metrics.
Numerical results show that the proposed protocol is a feasible solution to
address QoS service differentiation for traffic with different priorities. Comment: 13 pages
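A forwarding decision combining the metrics the abstract lists (priority deadlines, two-hop delay, link reliability, energy) might be sketched as below. The candidate tuple layout, `pick_next_hop`, and the tie-breaking rule are hypothetical; the paper's protocol defines its own link-metric estimators and queue policy.

```python
def pick_next_hop(candidates, deadline, min_prr=0.0):
    """Choose a forwarder using two-hop neighborhood information.
    Each candidate is (node_id, prr_two_hop, delay_two_hop, energy):
    keep candidates whose two-hop delay meets the packet's deadline
    and whose reliability clears a floor, then prefer the highest
    PRR, breaking ties by residual energy. Hypothetical metric
    structure for illustration only."""
    feasible = [c for c in candidates
                if c[2] <= deadline and c[1] >= min_prr]
    if not feasible:
        return None        # caller falls back, e.g. best-effort queue
    return max(feasible, key=lambda c: (c[1], c[3]))[0]

hops = [("a", 0.95, 40.0, 0.6), ("b", 0.99, 90.0, 0.9),
        ("c", 0.80, 30.0, 0.8)]
print(pick_next_hop(hops, deadline=50.0))  # → 'a'
```

With a tight deadline the reliable-but-slow neighbor "b" is excluded, which is the essence of traffic differentiation: the same neighborhood yields different forwarders for different priority classes.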
Study of a Dynamic Cooperative Trading Queue Routing Control Scheme for Freeways and Facilities with Parallel Queues
This article explores the coalitional stability of a new cooperative control
policy for freeways and parallel queuing facilities with multiple servers.
Based on predicted future delays per queue or lane, a VOT-heterogeneous
population of agents can agree to switch lanes or queues and transfer payments
to each other in order to minimize the total cost of the incoming platoon. The
strategic interaction is captured by an n-level Stackelberg model with
coalitions, while the cooperative structure is formulated as a partition
function game (PFG). The stability concept explored is the strong-core for PFGs
which we found appropriate given the nature of the problem. This concept ensures
that the efficient allocation is individually rational and coalitionally
stable. We analyze this control mechanism for two settings: a static vertical
queue and a dynamic horizontal queue. For the former, we first characterize the
properties of the underlying cooperative game. Our simulation results suggest
that the setting is always strong-core stable. For the latter, we propose a new
relaxation program for the strong-core concept. Our simulation results on a
freeway bottleneck with constant outflow using Newell's car-following model
show the imputations to be generally strong-core stable and the coalitional
instabilities to remain small with regard to users' costs. Comment: 3 figures. Presented at Annual Meeting Transportation Research Board
2018, Washington DC. Proof of conjecture 1 pending
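The elementary cooperative move underlying the scheme, two agents with different values of time (VOT) swapping queue positions with a transfer payment so both gain, can be illustrated for a single pair. The equal split of the gain below is just one individually rational transfer; the paper allocates via the strong core of a partition function game, and `swap_with_transfer` is a hypothetical name.

```python
def swap_with_transfer(v1, d1, v2, d2):
    """Agents 1 and 2 have values of time v1, v2 and face delays
    d1, d2. Swapping positions is jointly beneficial iff it lowers
    the total cost v1*d1 + v2*d2, i.e. iff the higher-VOT agent
    currently sits behind the longer delay. Returns (gain, payment
    from agent 1 to agent 2) under an equal split of the gain, or
    None when no mutually beneficial swap exists. Illustrative
    two-agent case, not the paper's full allocation rule."""
    before = v1 * d1 + v2 * d2
    after = v1 * d2 + v2 * d1
    gain = before - after
    if gain <= 0:
        return None
    # Equal split: each agent ends up gain/2 better off than before.
    t = (v1 * d1 - gain / 2) - v1 * d2
    return gain, t

# High-VOT agent (v=10) behind a long delay buys the low-VOT
# agent's (v=2) short slot:
print(swap_with_transfer(10.0, 8.0, 2.0, 3.0))  # → (40.0, 30.0)
```

After the swap, agent 1 pays 30 for the short slot (net cost 60 versus 80 before), while agent 2 pockets 30 against a delay cost of 16 (net -14 versus 6 before): both improve by half the gain of 40.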
An acceleration simulation method for power law priority traffic
A method for accelerated simulation of self-similar processes is proposed. This technique simplifies
the simulation model and improves efficiency by using excess packets instead of packet-by-packet source traffic for FIFO and non-FIFO buffer schedulers. This research focuses on developing an equivalent model of the conventional packet buffer that can produce an output analysis (in this case, the steady-state probability) much faster. The acceleration method is a further development of the Traffic Aggregation technique, which had previously been applied to FIFO buffers only, and applies the Generalized Ballot Theorem to calculate the waiting time for low-priority traffic (combined with prior work on traffic aggregation). This hybrid method is shown to provide a significant reduction in processing time, while maintaining queuing behavior in the buffer that is highly accurate when compared to results from a conventional simulation
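The basic idea behind aggregation-based acceleration, feeding the buffer only the slot-level excess over service capacity instead of individual packet events, can be shown with a Lindley-type recursion. This is a minimal sketch of the aggregation principle under a deterministic per-slot capacity c, not the paper's hybrid Ballot-Theorem method; all names are illustrative.

```python
def lindley_aggregated(arrivals, c):
    """Per-slot Lindley recursion: the queue absorbs only the excess
    of the aggregated arrival count a over service capacity c. One
    update per slot instead of one event per packet -- the basic idea
    behind traffic-aggregation acceleration (illustrative sketch)."""
    q, trace = 0, []
    for a in arrivals:
        q = max(0, q + a - c)
        trace.append(q)
    return trace

def lindley_packetwise(arrivals, c):
    """Reference model: enqueue packets one by one, then serve up to
    c per slot. Produces the same end-of-slot backlog, but with far
    more events per slot."""
    q, trace = 0, []
    for a in arrivals:
        for _ in range(a):
            q += 1                 # one event per packet
        q = max(0, q - c)
        trace.append(q)
    return trace

slots = [3, 0, 5, 1, 0, 4, 2]
assert lindley_aggregated(slots, 2) == lindley_packetwise(slots, 2)
```

Because both recursions yield identical backlog traces, steady-state probabilities estimated from the aggregated model match the conventional packet-by-packet simulation while doing far less work per slot.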