2,763 research outputs found
Smart Dimensioning of IP Network Links
Link dimensioning is generally considered an effective and (operationally) simple mechanism to meet (given) performance requirements. In practice, the required link capacity C is often estimated by rules of thumb, such as C = d·M, where M is the (envisaged) average traffic rate and d some (empirically determined) constant larger than 1. This paper studies the viability of this class of 'simplistic' dimensioning rules. Throughout, the performance criterion imposed is that the fraction of intervals of length T in which the input exceeds the available output capacity (i.e., C·T) should not exceed ε, for given T and ε.
We first present a dimensioning formula that expresses the required link capacity as a function of M and a variance term V(T), which captures the burstiness at timescale T. We explain how M and V(T) can be estimated with low measurement effort. The dimensioning formula is then used to validate dimensioning rules of the type C = d·M. Our main findings are: (i) the factor d is strongly affected by the nature of the traffic, the level of aggregation, and the network infrastructure; if these conditions are more or less constant, one could empirically determine d; (ii) we can explicitly characterize how d is affected by the 'performance parameters', i.e., T and ε.
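To make the rule concrete, the sketch below computes the required capacity under a Gaussian dimensioning formula of the form C(T, ε) = M + (1/T)·√(−2 ln ε · V(T)) and derives the equivalent rule-of-thumb factor d = C/M. The exact form of the formula is our reading of the approach described above; function names and the example numbers are illustrative assumptions.

```python
import math

def required_capacity(M, V_T, T, eps):
    """Capacity C such that traffic aggregated over windows of length T
    exceeds C*T with probability at most eps, assuming Gaussian traffic
    with mean rate M and variance V_T (of the traffic in a T-window)."""
    return M + (1.0 / T) * math.sqrt(-2.0 * math.log(eps) * V_T)

def rule_of_thumb_factor(M, V_T, T, eps):
    """The factor d that makes the rule C = d*M match the formula above."""
    return required_capacity(M, V_T, T, eps) / M

# Illustrative numbers: 100 Mbit/s mean load, T = 10 ms, exceedance
# fraction at most 1%; d grows with burstiness V_T and shrinks with T.
print(rule_of_thumb_factor(M=100e6, V_T=1e12, T=0.01, eps=0.01))
```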
Resource dimensioning through buffer sampling
Link dimensioning, i.e., selecting a (minimal) link capacity such that the users' performance requirements are met, is a crucial component of network design. It requires insight into the interrelationship between the traffic offered (in terms of the mean offered load M, but also its fluctuation around the mean, i.e., 'burstiness'), the envisioned performance level, and the capacity needed. We first derive, for different performance criteria, theoretical dimensioning formulae that estimate the required capacity C as a function of the input traffic and the performance target. For the special case of Gaussian input traffic these formulae reduce to C = M + α√V, where α directly relates to the performance requirement (as agreed upon in a service level agreement) and V reflects the burstiness (at the timescale of interest). We also observe that Gaussianity applies for virtually all realistic scenarios; notably, already at a relatively low aggregation level the Gaussianity assumption is justified.
As estimating M is relatively straightforward, the remaining open issue concerns the estimation of V. We argue that, particularly if V corresponds to small timescales, it may be inaccurate to estimate it directly from the traffic traces. Therefore, we propose an indirect method that samples the buffer content, estimates the buffer content distribution, and 'inverts' this to the variance. We validate the inversion through extensive numerical experiments (using a sizeable collection of traffic traces from various representative locations); the resulting estimate of V is then inserted in the dimensioning formula. These experiments show that both the inversion and the dimensioning formula are remarkably accurate.
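The inversion step can be sketched as follows, assuming the Gaussian approximation P(Q > b) ≈ exp(−(b + (C − M)T)² / (2·V(T))) for the stationary buffer content Q; this particular tail form and the function names are our assumptions for illustration, not necessarily the paper's exact estimator.

```python
import math

def variance_from_buffer_tail(b, p, C, M, T):
    """Invert an empirically sampled buffer tail probability p = P(Q > b)
    to the traffic variance V(T), under the Gaussian approximation
    P(Q > b) ~ exp(-(b + (C - M) * T)**2 / (2 * V(T)))."""
    return (b + (C - M) * T) ** 2 / (-2.0 * math.log(p))

def required_capacity(M, V, alpha):
    """Dimensioning formula C = M + alpha * sqrt(V); alpha encodes the
    performance target, e.g. alpha = sqrt(-2 * ln(eps))."""
    return M + alpha * math.sqrt(V)
```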
Reduction of dynamical biochemical reaction networks in computational biology
Biochemical networks are used in computational biology to model the static and dynamical details of systems involved in cell signaling, metabolism, and regulation of gene expression. Parametric and structural uncertainty, as well as combinatorial explosion, are strong obstacles to analyzing the dynamics of large models of this type. Multi-scaleness is another property of these networks, one that can be used to get past some of these obstacles. Networks with many well-separated time scales can be reduced to simpler networks, in a way that depends only on the orders of magnitude, not on the exact values, of the kinetic parameters. The main idea behind such robust simplifications of networks is the concept of dominance among model elements, which allows a hierarchical organization of these elements according to their effects on the network dynamics. This concept finds a natural formulation in tropical geometry. We revisit, in the light of these new ideas, the main approaches to model reduction of reaction networks, such as the quasi-steady state and quasi-equilibrium approximations, and provide practical recipes for model reduction of linear and nonlinear networks. We also discuss the application of model reduction to backward pruning machine learning techniques.
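As a toy illustration of the quasi-steady-state approximation mentioned above (not the paper's tropical-geometry machinery), the following sketch compares a full Michaelis-Menten mechanism with its reduced one-variable form when the binding reactions are fast; all rate constants are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# k1, km1 fast (binding/unbinding), k2 slow (catalysis): the timescale
# separation that justifies the quasi-steady-state approximation (QSSA).
k1, km1, k2 = 100.0, 100.0, 1.0
E0, S0 = 1.0, 10.0  # total enzyme, initial substrate

def full(t, y):
    S, ES = y
    E = E0 - ES                    # free enzyme by conservation
    return [-k1 * E * S + km1 * ES,
            k1 * E * S - (km1 + k2) * ES]

def reduced(t, y):
    S = y[0]                       # QSSA: ES equilibrates instantly,
    Km = (km1 + k2) / k1           # leaving Michaelis-Menten kinetics
    return [-k2 * E0 * S / (Km + S)]

t_eval = np.linspace(0, 20, 200)
sol_full = solve_ivp(full, (0, 20), [S0, 0.0], t_eval=t_eval, method="LSODA")
sol_red = solve_ivp(reduced, (0, 20), [S0], t_eval=t_eval, method="LSODA")
print("max |S_full - S_reduced| =",
      np.max(np.abs(sol_full.y[0] - sol_red.y[0])))
```

The discrepancy shrinks as the separation between the binding and catalysis timescales grows, which is exactly the regime in which order-of-magnitude (tropical) arguments justify the reduction.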
The Chameleon Architecture for Streaming DSP Applications
We focus on architectures for streaming DSP applications such as wireless baseband processing and image processing. We aim at a single generic architecture that is capable of dealing with different DSP applications. This architecture has to be energy efficient and fault tolerant. We introduce a heterogeneous tiled architecture and present the details of a domain-specific reconfigurable tile processor called Montium. This reconfigurable processor has a small footprint (1.8 mm² in a 130 nm process), is power efficient, and exploits the locality-of-reference principle. Reconfiguring the device is very fast; for example, loading the coefficients for a 200-tap FIR filter is done within 80 clock cycles. The tiles of the tiled architecture are connected to a Network-on-Chip (NoC) via a network interface (NI). Two NoCs have been developed: a packet-switched and a circuit-switched version. Both provide two types of services: guaranteed throughput (GT) and best effort (BE). For both NoCs, estimates of power consumption are presented. The NI synchronizes data transfers and configures and starts/stops the tile processor. For dynamically mapping applications onto the tiled architecture, we introduce a run-time mapping tool.
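A run-time mapper in the spirit described above might greedily place streaming tasks on tiles while exploiting locality of reference; the sketch below is hypothetical (the names, the Manhattan-distance cost model, and the capacities are invented for illustration and are not the tool from the paper).

```python
from dataclasses import dataclass, field

@dataclass
class Tile:
    x: int
    y: int
    capacity: int                  # abstract processing budget
    tasks: list = field(default_factory=list)

def noc_hops(a, b):
    # Manhattan distance approximates NoC path length (and hence energy)
    return abs(a.x - b.x) + abs(a.y - b.y)

def map_task(tiles, name, load, predecessor=None):
    candidates = [t for t in tiles if t.capacity >= load]
    if not candidates:
        raise RuntimeError("no tile has sufficient free capacity")
    # prefer tiles close to the task's data source (locality of reference)
    best = min(candidates,
               key=lambda t: noc_hops(t, predecessor) if predecessor else 0)
    best.capacity -= load
    best.tasks.append(name)
    return best

tiles = [Tile(x, y, capacity=100) for x in range(2) for y in range(2)]
src = map_task(tiles, "source", 40)
map_task(tiles, "fir200", 60, predecessor=src)  # placed near its source
```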
A Survey of Green Networking Research
Reduction of unnecessary energy consumption is becoming a major concern in wired networking, because of the potential economic benefits and the expected environmental impact. These issues, usually referred to as "green networking", relate to embedding energy-awareness in the design, the devices, and the protocols of networks. In this work, we first formulate a more precise definition of the "green" attribute. We furthermore identify a few paradigms that are the key enablers of energy-aware networking research. We then overview the current state of the art and provide a taxonomy of the relevant work, with a special focus on wired networking. At a high level, we identify four branches of green networking research that stem from different observations on the root causes of energy waste, namely (i) adaptive link rate, (ii) interface proxying, (iii) energy-aware infrastructures, and (iv) energy-aware applications. We not only explore specific proposals pertaining to each of the above branches, but also offer a perspective for future research.
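As a hint of what an adaptive link rate policy looks like in practice, here is a minimal sketch; the supported rates, thresholds, and dual-threshold hysteresis are illustrative placeholders rather than any specific surveyed proposal.

```python
RATES_MBPS = [10, 100, 1000]  # supported Ethernet link rates

def select_rate(offered_mbps, current_mbps, low=0.3, high=0.8):
    """Pick the lowest rate that keeps utilization acceptable, with a
    low/high dual threshold to avoid oscillating between rates."""
    util = offered_mbps / current_mbps
    if util > high:  # step up until utilization drops below `high`
        for r in RATES_MBPS:
            if r > current_mbps and offered_mbps / r <= high:
                return r
        return RATES_MBPS[-1]
    if util < low:   # step down as far as `high` allows (saves energy)
        lower = [r for r in RATES_MBPS
                 if r < current_mbps and offered_mbps / r <= high]
        return min(lower) if lower else current_mbps
    return current_mbps

print(select_rate(20, 1000))  # lightly loaded gigabit link -> 100 Mbit/s
```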
Astrobiological Complexity with Probabilistic Cellular Automata
The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, but it has so far been quantitatively modeled only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of the cellular automata constructs, this approach enables a quick analysis of a large and ambiguous input parameter space. We perform a simple clustering analysis of typical astrobiological histories and discuss the relevant boundary conditions of practical importance for planning and guiding actual empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and near-future space missions, we demonstrate how numerical results could offer a cautious rationale for the continuation of practical SETI searches.
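A minimal PCA of this kind is easy to sketch; in the code below the site states, the input probability matrix, and the colonization term are arbitrary placeholders rather than calibrated astrobiological parameters.

```python
import numpy as np

rng = np.random.default_rng(42)

# Site states: 0 = lifeless, 1 = simple life, 2 = technological.
# P[s] gives spontaneous per-step transition probabilities out of state s;
# the values are arbitrary placeholders, not calibrated rates.
P = np.array([[0.999, 0.001, 0.0],
              [0.0005, 0.9985, 0.001],
              [0.001, 0.0, 0.999]])

def step(grid, colonize=0.05):
    n = grid.shape[0]
    new = grid.copy()
    for i in range(n):
        for j in range(n):
            s = grid[i, j]
            new[i, j] = rng.choice(3, p=P[s])
            if s == 0:  # a technological neighbor may seed simple life
                nbrs = [grid[(i - 1) % n, j], grid[(i + 1) % n, j],
                        grid[i, (j - 1) % n], grid[i, (j + 1) % n]]
                if 2 in nbrs and rng.random() < colonize:
                    new[i, j] = 1
    return new

grid = np.zeros((20, 20), dtype=int)  # a small patch of the Habitable Zone
for _ in range(200):
    grid = step(grid)
print("fraction of inhabited sites:", np.mean(grid > 0))
```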
Adaptive Replication in Distributed Content Delivery Networks
We address the problem of content replication in large distributed content delivery networks, composed of a data center assisted by many small servers with limited capabilities located at the edge of the network. The objective is to optimize the placement of contents on the servers so as to offload the data center as much as possible. We model the system constituted by the small servers as a loss network, each loss corresponding to a request to the data center. Based on large-system/large-storage behavior, we obtain an asymptotic formula for the optimal replication of contents and propose adaptive schemes related to those encountered in cache networks, but reacting here to loss events, as well as faster algorithms that generate virtual events at a higher rate while keeping the same target replication. We show through simulations that our adaptive schemes significantly outperform standard replication strategies in terms of both loss rates and adaptation speed.
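A toy version of such a loss-driven adaptive scheme can be sketched as follows; the class, the random-eviction rule, and the popularity model are illustrative assumptions, not the paper's algorithm.

```python
import random
from collections import Counter

class EdgeSystem:
    """Toy model: small servers hold replicas; a request no edge server
    can satisfy is a 'loss' served by the data center, and in reaction
    one extra replica of that content is created (random eviction)."""

    def __init__(self, n_servers, slots, catalog):
        self.servers = [set(random.sample(catalog, slots))
                        for _ in range(n_servers)]
        self.losses = Counter()

    def request(self, content):
        if any(content in s for s in self.servers):
            return "edge"
        self.losses[content] += 1          # loss event drives adaptation
        srv = random.choice(self.servers)
        srv.discard(random.choice(tuple(srv)))
        srv.add(content)
        return "datacenter"

system = EdgeSystem(n_servers=50, slots=10, catalog=list(range(1000)))
for _ in range(20000):                     # Zipf-like popularity skew
    c = min(int(random.paretovariate(1.2)), 999)
    system.request(c)
print("losses handled by data center:", sum(system.losses.values()))
```

Over time, popular contents accumulate replicas because they trigger more loss events, which is the self-adaptation the abstract describes.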