Stochastic Analysis of Power-Aware Scheduling
Energy consumption in a computer system can be reduced by dynamic speed scaling, which adapts the processing speed to the current load. This paper studies the optimal way to adjust speed to balance mean response time and mean energy consumption when jobs arrive as a Poisson process and processor sharing scheduling is used. Both bounds and asymptotics for the optimal speeds are provided. Interestingly, a simple scheme that halts when the system is idle and uses a static rate while the system is busy provides nearly the same performance as the optimal dynamic speed scaling. However, dynamic speed scaling, which allocates a higher speed when more jobs are present, significantly improves robustness to bursty traffic and mis-estimation of workload parameters.
Power-Aware Speed Scaling in Processor Sharing Systems
Energy use of computer communication systems has quickly become a vital design consideration. One effective method for reducing energy consumption is dynamic speed scaling, which adapts the processing speed to the current load. This paper studies how to optimally scale speed to balance mean response time and mean energy consumption under processor sharing scheduling. Both bounds and asymptotics for the optimal speed scaling scheme are provided. These results show that a simple scheme that halts when the system is idle and uses a static rate while the system is busy provides nearly the same performance as the optimal dynamic speed scaling. However, the results also highlight that dynamic speed scaling provides at least one key benefit: significantly improved robustness to bursty traffic and mis-estimation of workload parameters.
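A toy illustration of the tradeoff in the two abstracts above (a sketch under assumed parameters, not the papers' analysis): for the "halt when idle, static speed while busy" scheme in an M/M/1-PS queue with arrival rate λ and service rate s·μ, the mean response time is 1/(sμ − λ) and power s^α is drawn only during the busy fraction λ/(sμ), so the best static speed can be found numerically:

```python
def static_cost(s, lam=0.5, mu=1.0, beta=1.0, alpha=2.0):
    """Cost = mean response time + beta * mean power for the scheme that
    halts when idle and runs at static speed s while busy, in an
    M/M/1-PS queue (mean response time 1/(s*mu - lam)); power s**alpha
    is drawn only while busy.  All parameter values are illustrative."""
    if s * mu <= lam:
        return float("inf")          # queue unstable at this speed
    resp = 1.0 / (s * mu - lam)
    busy = lam / (s * mu)            # long-run fraction of time busy
    return resp + beta * busy * s ** alpha

# crude grid search for the best static busy-speed
speeds = [0.01 * k for k in range(1, 500)]
best = min(speeds, key=static_cost)
print(f"best static speed ~ {best:.2f}, cost ~ {static_cost(best):.3f}")
```

With these illustrative parameters the optimum sits near s ≈ 1.91; the abstracts' point is that this static scheme already comes close to the fully dynamic optimum.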
The age of information in gossip networks
We introduce models of gossip-based communication networks in which each node is simultaneously a sensor, a relay, and a user of information. We model the status of ages of information between nodes as a discrete-time Markov chain. In this setting a gossip transmission policy is a decision made at each node regarding what type of information to relay at any given time (if any). When transmission policies are based on random decisions, we are able to analyze the age of information in certain illustrative structured examples either by means of an explicit analysis, an algorithm, or asymptotic approximations. Our key contribution is presenting this class of models.
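The randomized-policy setting can be sketched on a toy three-node line (the topology, probabilities, and update rule here are assumptions for illustration, not the paper's examples):

```python
import random

def mean_sink_age(p=0.5, steps=200_000, seed=1):
    """Toy three-node line 0 -> 1 -> 2: node 0 senses fresh data every
    slot, and each slot node i+1 independently hears node i's current
    information with probability p (a randomized gossip policy).
    Returns the time-average age of information at the sink, node 2."""
    random.seed(seed)
    age = [0, 0, 0]
    total = 0
    for _ in range(steps):
        nxt = [0, age[1] + 1, age[2] + 1]   # everyone's information ages
        if random.random() < p:
            nxt[1] = age[0] + 1             # node 1 hears node 0
        if random.random() < p:
            nxt[2] = age[1] + 1             # node 2 hears node 1
        age = nxt
        total += age[2]
    return total / steps

print(f"mean age at node 2 ~ {mean_sink_age():.2f} (about 2/p = 4 for this chain)")
```

Each relay hop adds roughly one mean inter-success time 1/p to the stationary age, which the simulation reproduces.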
Improving the fairness of FAST TCP to new flows
It has been observed that FAST TCP, and the related protocol TCP Vegas, suffer unfairness when many flows arrive at a single bottleneck link without intervening departures. We show that the effect is even more marked if a new flow arrives when existing flows share bandwidth fairly, and propose a simple method to ameliorate this effect.
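One commonly cited mechanism behind such unfairness (not spelled out in the abstract) is that a newly arriving flow folds the standing queueing delay into its baseRTT estimate. A toy fluid-equilibrium sketch, using the textbook Vegas/FAST equilibrium rate x_i = α/q_i and made-up parameters:

```python
def equilibrium_rates(alpha=50.0, capacity=1000.0):
    """Toy fluid model of Vegas/FAST equilibrium: each flow i sends at
    x_i = alpha / q_i, where q_i is the queueing delay the flow
    *perceives* (its RTT minus its own baseRTT estimate).  A flow
    arriving at an already-busy link folds the standing queueing delay
    into its baseRTT, so it perceives less queueing and takes a larger
    share.  All parameter values are made up for illustration."""
    b = alpha / capacity            # standing queue delay left by flow 1 alone
    lo, hi = b + 1e-9, 10.0         # bisect for total queue delay q solving
    for _ in range(200):            #   alpha/q + alpha/(q - b) = capacity
        q = (lo + hi) / 2
        if alpha / q + alpha / (q - b) > capacity:
            lo = q                  # queue too small -> rates exceed capacity
        else:
            hi = q
    return alpha / q, alpha / (q - b)

x_old, x_new = equilibrium_rates()
print(f"old flow {x_old:.0f}, new flow {x_new:.0f} (fair share: 500 each)")
```

In this sketch the late arrival ends up with well over half the link, matching the direction of the observed unfairness.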
Active Queue Management for Fair Resource Allocation in Wireless Networks
This paper investigates the interaction between end-to-end flow control and MAC-layer scheduling on wireless links. We consider a wireless network with multiple users receiving information from a common access point; each user suffers fading, and a scheduler allocates the channel based on channel quality, but subject to fairness and latency considerations. We show that the fairness property of the scheduler is compromised by the transport layer flow control of TCP New Reno. We provide a receiver-side control algorithm, CLAMP, that remedies this situation. CLAMP works at a receiver to control a TCP sender by setting the TCP receiver's advertised window limit, and this allows the scheduler to allocate bandwidth fairly between the users.
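A minimal sketch of the receiver-side idea (the function name and the rate-to-window conversion below are illustrative assumptions, not CLAMP's actual control law):

```python
def advertised_window(target_rate_bps, rtt_s, mss_bytes=1448):
    """Receiver-side rate control in the spirit of CLAMP: a TCP sender
    limited to awnd bytes in flight transmits at roughly awnd/RTT, so
    the receiver can cap the sender near a target rate by advertising
    awnd = rate * RTT (rounded to whole segments, kept >= 2 MSS)."""
    bytes_in_flight = target_rate_bps / 8 * rtt_s
    segments = max(2, round(bytes_in_flight / mss_bytes))
    return segments * mss_bytes

# e.g. cap a flow at 8 Mbit/s over a 100 ms path
print(advertised_window(8_000_000, 0.100), "bytes")
```

Because the advertised window bounds the sender regardless of its congestion window, this lever works without modifying the sender, which is the appeal of a receiver-side scheme.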
Online Algorithms for Geographical Load Balancing
It has recently been proposed that Internet energy costs, both monetary and environmental, can be reduced by exploiting temporal variations and shifting processing to data centers located in regions where energy currently has low cost. Lightly loaded data centers can then turn off surplus servers. This paper studies online algorithms for determining the number of servers to leave on in each data center, and then uses these algorithms to study the environmental potential of geographical load balancing (GLB). A commonly suggested algorithm for this setting is “receding horizon control” (RHC), which computes the provisioning for the current time by optimizing over a window of predicted future loads. We show that RHC performs well in a homogeneous setting, in which all servers can serve all jobs equally well; however, we also prove that differences in propagation delays, servers, and electricity prices can cause RHC to perform badly. So, we introduce variants of RHC that are guaranteed to perform well in the face of such heterogeneity. These algorithms are then used to study the feasibility of powering a continent-wide set of data centers mostly by renewable sources, and to understand what portfolio of renewable energy is most effective.
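A toy version of RHC for this provisioning problem (the cost model, parameters, and perfect load prediction are assumptions for illustration, not the paper's setting):

```python
from itertools import product

def rhc_provision(loads, window=3, beta=4.0, energy=1.0, x0=0):
    """Toy receding horizon control for server provisioning: at each
    time t, choose server counts for the next `window` steps to minimize
    running cost (energy per server-slot) plus a power-up switching cost
    (beta per server turned on), then commit only the first decision.
    Predictions are taken to be the true future loads here (perfect
    prediction); the cost model and parameters are illustrative."""
    plan_out, prev = [], x0
    for t in range(len(loads)):
        horizon = loads[t:t + window]
        cap = max(horizon)
        best_cost, best_first = float("inf"), cap
        # brute-force every feasible plan over the window (fine at toy scale)
        for plan in product(range(cap + 1), repeat=len(horizon)):
            if any(x < d for x, d in zip(plan, horizon)):
                continue  # each step must have enough servers for its load
            cost, p = 0.0, prev
            for x in plan:
                cost += energy * x + beta * max(0, x - p)
                p = x
            if cost < best_cost:
                best_cost, best_first = cost, plan[0]
        plan_out.append(best_first)
        prev = best_first
    return plan_out

print(rhc_provision([1, 3, 1, 1, 3, 0]))
```

With a switching cost this large, RHC keeps the extra servers on across the load dip rather than cycling them off and back on.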
Sizes of Minimum Connected Dominating Sets of a Class of Wireless Sensor Networks
We consider an important performance measure of wireless sensor networks, namely, the least number of nodes, N, required to facilitate routing between any pair of nodes, allowing other nodes to remain in sleep mode in order to conserve energy. We derive the expected value and the distribution of N for one-dimensional dense networks.
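For intuition, here is a greedy connected-dominating-set sketch for nodes on a line (the paper instead derives E[N] and the law of N for dense random deployments; the greedy and its optimality on a line rest on a standard interval-covering exchange argument, stated here as an assumption):

```python
import bisect

def line_mcds(positions, r):
    """Greedy connected dominating set for nodes on a line with radio
    range r, assuming the network is connected (consecutive gaps <= r).
    Each round extends coverage as far as possible while keeping the new
    dominator within r of the previous one, so the chosen nodes both
    dominate every node and form a connected relay backbone."""
    xs = sorted(positions)
    cds, cur, i = [], None, 0
    while i < len(xs):
        limit = (xs[i] if cur is None else cur) + r
        j = bisect.bisect_right(xs, limit) - 1   # farthest node within reach
        if cds and xs[j] == cds[-1]:
            raise ValueError("network is disconnected at this range")
        cds.append(xs[j])
        cur = xs[j]
        i = bisect.bisect_right(xs, cur + r)     # first node left uncovered
    return cds

nodes = [0.0, 0.4, 0.9, 1.3, 2.0, 2.6, 3.1]
print(line_mcds(nodes, r=1.0))
```

On this toy deployment four dominators are needed: no three nodes can simultaneously cover the whole line and stay pairwise within range of their neighbors in the set.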
File Fragmentation over an Unreliable Channel
It has been recently discovered that heavy-tailed file completion time can result from protocol interaction even when file sizes are light-tailed. A key to this phenomenon is the RESTART feature, where if a file transfer is interrupted before it is completed, the transfer needs to restart from the beginning. In this paper, we show that independent or bounded fragmentation guarantees light-tailed file completion time as long as the file size is light-tailed; i.e., in this case, heavy-tailed file completion time can only originate from heavy-tailed file sizes. If the file size is heavy-tailed, then the file completion time is necessarily heavy-tailed. For this case, we show that when the file size distribution is regularly varying, then under independent or bounded fragmentation, the completion time tail distribution function is asymptotically upper bounded by that of the original file size stretched by a constant factor. We then prove that if the failure distribution has non-decreasing failure rate, the expected completion time is minimized by dividing the file into equal-sized fragments; this optimal fragment size is unique but depends on the file size. We also present a simple blind fragmentation policy where the fragment sizes are constant and independent of the file size and prove that it is asymptotically optimal. Finally, we bound the error in expected completion time due to error in modeling of the failure process.
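The RESTART effect can be sketched numerically for a constant (exponential) failure rate, where a fragment of work x takes (e^(λx) − 1)/λ in expectation; the per-fragment overhead δ below is an assumption added so that ever-finer fragmentation is not free and the equal-split optimum is interior:

```python
import math

def expected_time(file_size, n, lam, delta=0.1):
    """Expected completion time when the file is split into n equal
    fragments over a channel with exponential(lam) failures and RESTART
    (an interrupted fragment restarts from scratch): a fragment of work
    x takes (e**(lam*x) - 1)/lam in expectation.  delta is an assumed
    per-fragment overhead (e.g. header/handshake)."""
    x = file_size / n + delta
    return n * math.expm1(lam * x) / lam

def best_fragment_count(file_size, lam, delta=0.1, n_max=1000):
    """Exhaustively search for the best number of equal fragments."""
    return min(range(1, n_max + 1),
               key=lambda n: expected_time(file_size, n, lam, delta))

F, LAM = 10.0, 1.0
n_opt = best_fragment_count(F, LAM)
print(f"optimal fragments: {n_opt}, expected time: {expected_time(F, n_opt, LAM):.2f}")
```

Sending the whole file in one piece costs (e^(λ(F+δ)) − 1)/λ, on the order of 10^4 time units here, while the equal-split optimum is about 16, illustrating how fragmentation tames the exponential RESTART blow-up.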
Performance of Electrospun Polyacrylonitrile Nanofibrous Phases, Shown for the Separation of Water-Soluble Food Dyes via UTLC-Vis-ESI-MS
Research in the miniaturization of planar chromatography led to various approaches in manufacturing ultrathin-layer chromatography (UTLC) layers of reduced thickness (<50 µm) along with smaller instrumentation, as targeted in Office Chromatography. This novel concept merges 3D print & media technologies with miniaturized planar chromatography to realize an all-in-one instrument, in which all steps of UTLC are automated and integrated in the same tiny device. In this context, the development of electrospun polyacrylonitrile (PAN) nanofiber phases was investigated, as well as their performance. A nanofibrous stationary phase with fiber diameters of 150–225 nm and a thickness of ca. 25 µm was manufactured. Mixtures of water-soluble food dyes were printed on it using a modified office printer, and successfully separated to illustrate the capabilities of such UTLC media. The separation took 8 min for 30 mm and was faster (up to a factor of 2) than on particulate layers. The mean hRF values ranging from 25 to 90 for the five food dyes were well spread over the migration distance, with an overall reproducibility of 7% (mean %RSD over 5 different plates for 5 dyes). The individual mean plate numbers over 5 plates ranged between 8286 and 22,885 (mean of 11,722 over all 5 dyes). The single mean resolutions RS were between 1.7 and 6.5 (for the 5 food dyes over 5 plates), with highly satisfying reproducibilities (0.3 as mean deviation of RS). Using videodensitometry, different amounts separated in parallel led to reliable linear calibrations for each dye (sdv of 3.1–9.1% for peak heights and 2.4–9.3% for peak areas). Coupling to mass spectrometry via an elution head-based interface was successfully demonstrated for such ultrathin layers, showing several advantages such as a reduced cleaning process and a minimum zone distance. All these results underline the potential of electrospun nanofibrous phases to succeed as affordable stationary phases for quantitative UTLC.