Sharp Bounds in Stochastic Network Calculus
The practicality of the stochastic network calculus (SNC) is often questioned
on grounds of the potential looseness of its performance bounds. In this paper
it is uncovered that for bursty arrival processes (specifically Markov-Modulated
On-Off (MMOO) processes), whose amenability to per-flow analysis is typically
proclaimed as a highlight of SNC, the bounds can unfortunately indeed be very
loose (e.g., off by several orders of magnitude). In response to this uncovered
weakness of SNC, the (Standard) per-flow bounds are herein improved by deriving
a general sample-path bound, using martingale-based techniques, which
accommodates FIFO, SP, EDF, and GPS scheduling. The obtained (Martingale)
bounds gain an exponential decay factor in the number of flows. Moreover,
numerical comparisons against simulations show that the Martingale bounds are
remarkably accurate for FIFO, SP, and EDF scheduling; for GPS scheduling,
although the Martingale bounds substantially improve the Standard bounds, they
are numerically loose, calling for improvements in the core SNC analysis of GPS.
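To make the setting of the abstract concrete, the following sketch simulates a single MMOO source feeding a constant-rate FIFO queue and estimates the empirical backlog tail, the quantity that SNC bounds of the kind discussed above aim to upper-bound. All parameter values and function names are hypothetical choices for illustration, not taken from the paper:

```python
import random

def simulate_mmoo_backlog(p_on=0.05, p_off=0.05, peak=1.5, capacity=1.0,
                          steps=200_000, seed=1):
    """Simulate a Markov-Modulated On-Off source into a constant-rate
    FIFO queue and return the sampled backlog trajectory.

    p_on:  transition probability off -> on per slot
    p_off: transition probability on -> off per slot
    peak:  arrival rate while in the on state (0 while off)
    """
    random.seed(seed)
    on = False
    q = 0.0
    backlog = []
    for _ in range(steps):
        # Two-state Markov chain governing the source.
        on = (random.random() < p_on) if not on else (random.random() >= p_off)
        arrivals = peak if on else 0.0
        # Lindley recursion for the queue backlog.
        q = max(q + arrivals - capacity, 0.0)
        backlog.append(q)
    return backlog

def tail_prob(samples, b):
    """Empirical tail probability P(Q > b)."""
    return sum(s > b for s in samples) / len(samples)

backlog = simulate_mmoo_backlog()
print(tail_prob(backlog, 5.0))  # empirical backlog-tail estimate
```

With these (stable) parameters the source is on half the time on average, so the utilization is 0.75; comparing such empirical tails against analytical Standard and Martingale bounds is exactly the kind of numerical study the paper reports.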
Towards a System Theoretic Approach to Wireless Network Capacity in Finite Time and Space
In asymptotic regimes, both in time and space (network size), the derivation
of network capacity results is grossly simplified by brushing aside queueing
behavior in non-Jackson networks. This simplifying double-limit model, however,
lends itself to conservative numerical results in finite regimes. To properly
account for queueing behavior beyond a simple calculus based on average rates,
we advocate a system-theoretic methodology for the capacity problem in finite
time and space regimes. This methodology also accounts for spatial correlations
arising in networks with CSMA/CA scheduling, and it delivers rigorous
closed-form capacity results in terms of probability distributions. Unlike
numerous existing asymptotic results, which are subject to anecdotal practical
concerns, our transient results can be used in practical settings: for example,
to compute the time scales at which multi-hop routing is more advantageous than
single-hop routing.
Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"
According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient.
The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself.
Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
• The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
• The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
• The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise to their deletion. The behaviour of the entities may vary over time.
• The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date, and mobility requires information about the environment to be discovered.
The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems.
This workshop covers the aspects related to languages and programming environments as well as the analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. One year after the start of the projects, the goal of the workshop is to assess the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, as well as to devise strategies and new ideas for profitably continuing the research effort towards the overall objective of the initiative.
We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto, and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for the valuable collaboration.
Four lectures on probabilistic methods for data science
Methods of high-dimensional probability play a central role in applications
in statistics, signal processing, theoretical computer science, and related
fields. These lectures present a sample of particularly useful tools of
high-dimensional probability, focusing on the classical and matrix Bernstein
inequalities and the uniform matrix deviation inequality. We illustrate these
tools with applications to dimension reduction, network analysis, covariance
estimation, matrix completion, and sparse signal recovery. The lectures are
geared towards beginning graduate students who have taken a rigorous course in
probability but may not have any experience in data science applications.
Comment: Lectures given at the 2016 PCMI Graduate Summer School in Mathematics of
Data. Some typos and inaccuracies fixed.
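For reference, the matrix Bernstein inequality highlighted in the abstract can be stated in one standard form (for independent, mean-zero, symmetric random matrices; this statement is a well-known version and not a quotation from the lectures):

```latex
% Matrix Bernstein inequality (one standard form).
% X_1, ..., X_n: independent, mean-zero, symmetric d x d random matrices
% with \|X_i\| <= K almost surely; \|.\| denotes the operator norm.
\[
  \mathbb{P}\!\left( \Big\| \sum_{i=1}^{n} X_i \Big\| \ge t \right)
  \;\le\; 2d \,\exp\!\left( \frac{-t^2/2}{\sigma^2 + Kt/3} \right),
  \qquad
  \sigma^2 \;=\; \Big\| \sum_{i=1}^{n} \mathbb{E}\, X_i^2 \Big\|.
\]
```

The dimensional factor 2d is the price of moving from scalars to matrices; for t small relative to σ²/K the bound behaves like a Gaussian tail, and for large t like an exponential tail, mirroring the classical scalar Bernstein inequality.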