1,577 research outputs found
Elastic calls in an integrated services network: the greater the call size variability the better the QoS
We study a telecommunications network integrating prioritized stream calls and delay-tolerant elastic calls that are served with the remaining (varying) service capacity according to a processor-sharing discipline. The remarkable observation is presented and analytically supported that the expected elastic call holding time decreases in the variability of the elastic call size distribution. As a consequence, network planning guidelines or admission control schemes developed under the assumption of deterministic or lightly variable elastic call sizes are likely to be conservative and inefficient, given the commonly acknowledged heavy-tailed nature of, e.g., WWW document sizes. Application areas of the model and results include fixed IP or ATM networks and mobile cellular GSM/GPRS and UMTS networks.
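The role of call-size variability can be made concrete by comparing distributions with the same mean but increasing coefficient of variation. A minimal Python sketch, where the deterministic/exponential/Pareto choices are illustrative assumptions rather than the paper's model:

```python
import math
import random

def sample_cv(samples):
    """Empirical coefficient of variation: standard deviation over mean."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return math.sqrt(var) / mean

rng = random.Random(42)
mean_size, n = 1.0, 100_000

deterministic = [mean_size] * n
exponential = [rng.expovariate(1.0 / mean_size) for _ in range(n)]

# Pareto with tail index alpha in (1, 2): finite mean, infinite variance --
# the regime usually cited for heavy-tailed WWW document sizes.
alpha = 1.5
xm = mean_size * (alpha - 1) / alpha       # scale chosen so E[X] = mean_size
pareto = [xm / rng.random() ** (1 / alpha) for _ in range(n)]

for name, s in [("deterministic", deterministic),
                ("exponential", exponential),
                ("pareto(1.5)", pareto)]:
    print(f"{name:>13}: mean = {sum(s)/n:.2f}, CV = {sample_cv(s):.2f}")
```

All three distributions share the same mean, yet the empirical coefficient of variation climbs from 0 (deterministic) through roughly 1 (exponential) to far larger values (Pareto), which is the axis of "variability" along which the abstract's result operates.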
Revisiting an old friend: On the observability of the relation between Long Range Dependence and Heavy Tail
Taqqu's Theorem plays a fundamental role in Internet traffic modeling, for two reasons: first, its theoretical formulation matches closely, and in a meaningful manner, some of the key network mechanisms controlling traffic characteristics; second, it offers a plausible explanation for the origin of the long-range dependence property in relation to the heavy-tailed nature of the traffic components. Numerous attempts have since been made to observe its predictions empirically, either from real Internet traffic data or from numerical simulations based on popular traffic models, yet rarely has this resulted in a satisfactory quantitative agreement. This has raised in the literature a number of comments and questions, ranging from the adequacy of the theorem to real-world data to the relevance of the statistical tools involved in practical analyses. The present contribution studies under which conditions this fundamental theorem can actually be seen at work on real or simulated data. To do so, numerical simulations based on standard traffic models are analyzed in a wavelet framework. The key time scales involved are derived, enabling a discussion of the origin and nature of the difficulties encountered in attempts to empirically observe Taqqu's Theorem.
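The quantitative link the abstract refers to can be stated compactly: when many i.i.d. ON/OFF sources whose ON (or OFF) periods are heavy tailed with index alpha in (1, 2) are superposed, Taqqu's Theorem predicts long-range dependence with Hurst parameter H = (3 - alpha)/2. A small helper makes the mapping explicit:

```python
def hurst_from_tail_index(alpha: float) -> float:
    """Taqqu's Theorem: superposing many i.i.d. ON/OFF sources whose ON
    (or OFF) period distribution has heavy-tail index alpha in (1, 2)
    yields, in the limit, traffic with Hurst parameter H = (3 - alpha)/2."""
    if not 1.0 < alpha < 2.0:
        raise ValueError("heavy-tail index must lie in (1, 2)")
    return (3.0 - alpha) / 2.0

# Lighter tails push H toward 1/2 (short-range dependence);
# heavier tails push H toward 1 (strong long-range dependence).
for alpha in (1.1, 1.5, 1.9):
    print(f"alpha = {alpha:.1f}  ->  H = {hurst_from_tail_index(alpha):.2f}")
```

Observing this relation empirically requires estimating both alpha and H from data, which is exactly where the time-scale issues discussed in the abstract arise.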
Avon Park Air Force Range project: distribution and abundance of sensitive wildlife species at Avon Park Air Force Range
Executive Summary. We surveyed for seven species of sensitive wildlife (Florida gopher frog,
gopher tortoise, eastern indigo snake, eastern diamondback rattlesnake, Florida mouse, Florida
round-tailed muskrat, Sherman's fox squirrel) between October 1996 and May 1998 at Avon Park
Air Force Range (APR). The presence of 87 other species of amphibians, reptiles, and mammals
was also detected. Selected species of birds were noted, particularly if they were found dead on
APR roads. We recorded nine new county records of amphibians and reptiles from Polk and
Highlands counties, based on range maps presented in Ashton and Ashton (1981, 1985, 1988).
We discuss a biogeographic model based on the vertebrates recorded from APR, the Lake Wales
Ridge, and the low dune region along SR 64 to explain some of the distributional anomalies
associated with the Bombing Range Ridge and vicinity. (199-page document)
Large deviations analysis of scheduling policies for a web server
With increasing demand and availability of bandwidth resources, there has been tremendous
growth in the scale and speed of web servers. In web servers, scheduling plays an important
role in resource allocation (for instance, bandwidth allocation, processor allocation,
etc.). However, as the scale of a system increases, so does the number of activities/events
in the system (e.g., job arrivals), as a consequence of which the analysis of scheduling
becomes increasingly hard. In particular, the number of possible ways in which scheduling failure
(e.g., queue overflow, excessively large delay, instability of a system) can occur grows,
making it more difficult to understand the behavior of, and develop design rules for,
scheduling algorithms. However, a well-known observation from large deviations theory, that
large-scale systems fail in a "most likely way", can potentially be used
to simplify the design and analysis of scheduling. In this thesis, we study the implications
and applications of this effect on scheduling in a web server accessed by a large number of
sources.
We analyze the delay distribution of scheduling policies for web servers under a
many-sources large deviations regime, which models large-scale web servers well.
Due to the difficulties brought on by considering a large number of sources, only a small
number of scheduling policies, such as First-Come-First-Serve (FCFS), Generalized-Processor-Sharing
(GPS), and Priority Queueing policies, have been analyzed under the many-sources
regime. In particular, in a single-queue single-server setup, the delay characteristics of only
FCFS, Shortest-Job-First (SJF), and Longest-Job-First (LJF) have been analyzed.
In this thesis, we study the Two-Dimensional-Queueing (2DQ) framework, a unifying
queueing framework that allows identification of the "most likely way" in which
delay occurs, and use it to analyze the delay of various previously unexplored scheduling
policies. In conjunction with the 2DQ framework, we develop a new "cycle-based" technique
for understanding the large deviations tail probability of more complex policies.
Using the combination of the 2DQ framework and the cycle-based analysis, we
first analyze two interesting scheduling policies: the Shortest-Remaining-Processing-Time
(SRPT) policy (which is mean-delay optimal) and the Processor-Sharing (PS) policy (which is a
"fair" policy). We derive the asymptotic delay distributions (rate functions) of both policies
and study their behavior across job sizes. Next, we address three problems in implementing
the aforementioned scheduling policies: (i) end receivers may have bandwidth constraints
that are not taken into account in SRPT, (ii) the remaining processing time information might
not be available to the web server, and (iii) most actual implementations are variants of
SRPT that reflect other implementation constraints and/or jointly optimize other metrics
in addition to delay, e.g., jitter and fairness. To address these, we first develop finite-SRPT,
which takes into account the bandwidth constraint at the end receiver, and show that the policy
shifts between SRPT and a PS-like policy depending on the bandwidth constraint. Second,
we study the Least-Attained-Service (LAS) policy, which is viewed as a good substitute
for SRPT when the remaining job size is not available, and we analyze the penalty associated
with not using the remaining size information directly. Lastly, we analyze a class of
scheduling policies known as SMART that contains many variants of SRPT with different
fairness properties, and show that all policies in the class have the same tail probability of
delay across job sizes in the many-sources regime. The results of this thesis facilitate the
understanding of various scheduling policies under the many-sources regime and provide
an analytical queueing framework that can be used to understand other scheduling policies.
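The mean-delay contrast between the policies discussed above shows up even on tiny instances. A self-contained, event-driven sketch of textbook FCFS versus preemptive SRPT on a single server (not the thesis's large-deviations machinery):

```python
import heapq

def fcfs_mean_response(jobs):
    """Mean response time under First-Come-First-Serve.
    jobs: list of (arrival_time, size) tuples."""
    jobs = sorted(jobs)
    t, total = 0.0, 0.0
    for arrival, size in jobs:
        t = max(t, arrival) + size
        total += t - arrival
    return total / len(jobs)

def srpt_mean_response(jobs):
    """Mean response time under preemptive Shortest-Remaining-Processing-Time."""
    jobs = sorted(jobs)
    n, i, t = len(jobs), 0, 0.0
    heap, total, done = [], 0.0, 0
    while done < n:
        if not heap:                        # server idle: jump to next arrival
            t = max(t, jobs[i][0])
        while i < n and jobs[i][0] <= t:    # admit everything that has arrived
            heapq.heappush(heap, (jobs[i][1], i))
            i += 1
        remaining, j = heapq.heappop(heap)
        next_arrival = jobs[i][0] if i < n else float("inf")
        if t + remaining <= next_arrival:   # job finishes before any preemption
            t += remaining
            total += t - jobs[j][0]
            done += 1
        else:                               # new arrival may preempt: requeue
            heapq.heappush(heap, (remaining - (next_arrival - t), j))
            t = next_arrival
    return total / n

# A long job followed by a short one: SRPT preempts the long job.
jobs = [(0.0, 10.0), (1.0, 1.0)]
print("FCFS mean response:", fcfs_mean_response(jobs))   # 10.0
print("SRPT mean response:", srpt_mean_response(jobs))   # 6.0
```

Under FCFS the short job waits behind the long one (responses 10 and 10), while SRPT serves it immediately (responses 11 and 1), illustrating why SRPT is mean-delay optimal and why its tail behavior across job sizes is worth analyzing.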
User evaluation of ride technology research
The 23 organizations queried represent government, carrier, and manufacturing interests in air, marine, rail, and surface transportation systems. Results indicate a strong need for common terminology and data analysis/reporting techniques. The various types of ride criteria currently in use are discussed, particularly in terms of their respective database requirements. A plan of action is proposed for fulfilling the ride technology needs identified by this study.
Scalable high-capacity high-fan-out optical networks for constrained environments
The investigations carried out as part of the dissertation address the architecture and application of optical access networks for high-capacity, high-fan-out applications such as in-flight entertainment (IFE) and video-gaming environments. High-capacity, high-fan-out optical networks have a multitude of applications, such as expo centers, train area networks (TAN), video-gaming competitions, and other settings that require a large number of connected users. To keep the scope of the dissertation within limits, however, we have concentrated this work on IFE systems. IFE systems present unique challenges at the physical and application layers alike. In-flight entertainment systems have been a part of passengers' experience for a while now, yet currently available systems can be considered bare-bones at best due to the lack of adequate performance and support infrastructure. According to Electronic Arts (EA), one of the largest developers of video games in the world, demand for electronically distributed video games will exceed that for boxed games within a few years. This reflects a shifting trend towards the electronic distribution of video-game content as opposed to physical distribution.
Against this backdrop, the dissertation project involved defining a novel system architecture and capacity based on the requirements for a novel physical-layer architecture utilizing optical networks for high-speed, high-fan-out distribution of content. At the physical layer of the stacked communication model, a novel high-fan-out optical network was proposed and simulated for high data rates. Having defined the physical layer, the protocol stack was identified through rigorous observation and data-traffic analysis of a large set of traffic traces obtained from various sources, in order to understand the distribution and behavior of video-game-related traffic compared with regular Internet traffic. Data requirements were laid down based on this analysis, keeping in mind that bandwidth requirements are increasing at a tremendous pace and that the network should be able to support future high-definition and 3D gaming as well. Based on the data analysis, analytical models and latency-analysis models were also developed for bandwidth allocation in the high-fan-out network architectures. Analytical modeling gives insight into the performance of the technique as a function of incoming traffic, whereas latency analysis exposes the delay factors involved in running the technique over time. "State-full bandwidth allocation" (SBA) was proposed as part of the network-layer design for upstream transmission. The novel technique involves keeping state information from previous allocation cycles for use in future allocations.
The results show that the proposed high-fan-out, high-capacity physical-layer architecture can be used to distribute video-gaming-related content. Latency analysis and the design and development of a novel SBA algorithm were also carried out. The results were quite promising, in that a large number of users can be supported on a single-channel network. The SBA criteria can also be applied to multi-channel networks such as the physical architecture proposed, simulated, and investigated in this project. In summary, the project involved the design of a novel physical layer, network layer, and protocol stack of the communication model, with verification by simulations and mathematical modeling while adhering to application-layer requirements.
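The abstract describes SBA only as keeping state from previous allocation cycles for future allocations. One way such a scheme could look, with an exponentially weighted demand history standing in as the hypothetical state (the class name, smoothing rule, and proportional scaling are all assumptions, not the dissertation's algorithm):

```python
class StatefulAllocator:
    """Hypothetical sketch of a 'stateful' upstream bandwidth allocator:
    an exponential moving average of each user's past demand is carried
    across cycles and blended with the current request to predict the
    next grant; oversubscription is resolved by proportional scaling."""

    def __init__(self, capacity, smoothing=0.5):
        self.capacity = capacity      # channel capacity per cycle
        self.smoothing = smoothing    # weight on the current request
        self.history = {}             # user -> EWMA of past demand

    def allocate(self, requests):
        """requests: dict user -> requested bandwidth this cycle.
        Returns dict user -> granted bandwidth."""
        predicted = {}
        for user, req in requests.items():
            prev = self.history.get(user, req)   # first sight: trust the request
            pred = self.smoothing * req + (1 - self.smoothing) * prev
            self.history[user] = pred            # state carried to future cycles
            predicted[user] = pred
        total = sum(predicted.values())
        if total <= self.capacity:
            return predicted
        scale = self.capacity / total            # shrink grants to fit the channel
        return {u: p * scale for u, p in predicted.items()}

alloc = StatefulAllocator(capacity=100.0)
print(alloc.allocate({"a": 60.0, "b": 60.0}))  # oversubscribed: grants scaled to fit
print(alloc.allocate({"a": 20.0, "b": 60.0}))  # history smooths a's sudden drop
```

The carried-over history is what distinguishes this from a memoryless cycle-by-cycle allocator: a user whose demand dips for one cycle is not immediately starved in the next.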