On the Reliability of LTE Random Access: Performance Bounds for Machine-to-Machine Burst Resolution Time
The Random Access Channel (RACH) has been identified as one of the major
bottlenecks for accommodating a massive number of machine-to-machine (M2M)
users in LTE networks, especially in the case of burst arrivals of connection
requests. As a consequence, the burst resolution problem has sparked a large
number of works in the area, analyzing and optimizing the average performance
of RACH. However, an understanding of the probabilistic performance limits of
RACH is still missing. To address this limitation, in this paper we
investigate the reliability of RACH with access class barring (ACB). We model
RACH as a queuing system and apply stochastic network calculus to derive
probabilistic performance bounds for the burst resolution time, i.e., the
worst-case time it takes to connect a burst of M2M devices to the base
station. We illustrate the accuracy of the proposed methodology and its
potential applications in performance assessment and system dimensioning.
Comment: Presented at IEEE International Conference on Communications (ICC), 201
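The ACB mechanism described above can be illustrated with a minimal slotted simulation. This is a sketch only, not the stochastic network calculus analysis of the paper; the preamble count and barring factor below are illustrative assumptions.

```python
import random

def simulate_burst_resolution(n_devices, n_preambles=54, acb_factor=0.5, seed=0):
    """Slotted simulation of LTE RACH with access class barring (ACB).

    Each slot, every backlogged device passes the ACB check with
    probability `acb_factor`, then picks one of `n_preambles` uniformly.
    A preamble chosen by exactly one device succeeds (no collision).
    Returns the number of slots until the whole burst is resolved.
    """
    rng = random.Random(seed)
    backlog = n_devices
    slots = 0
    while backlog > 0:
        slots += 1
        choices = {}
        for _ in range(backlog):
            if rng.random() < acb_factor:
                preamble = rng.randrange(n_preambles)
                choices[preamble] = choices.get(preamble, 0) + 1
        # Only preambles selected by exactly one device succeed.
        backlog -= sum(1 for count in choices.values() if count == 1)
    return slots

# Example: time (in slots) to resolve a burst of 200 devices.
print(simulate_burst_resolution(200))
```

Repeating such runs over many seeds yields an empirical distribution of the burst resolution time, whose tail the paper's bounds are designed to characterize.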
A predefined channel coefficients library for vehicle-to-vehicle communications
Most VANET communication tests are assessed through simulation, and in the majority of simulation studies the physical layer suffers from an apparent lack of realism. The vehicular channel model has therefore become a critical issue in the field of intelligent transport systems (ITS), and a more robust channel model is needed to reflect reality. This paper provides an open-access, predefined channel-coefficient library. The library is based on 2x2 and 4x4 Multiple-Input Multiple-Output (MIMO) systems in V2V communications, using the spatial channel model extended (SCME), which helps to reduce the overall simulation time. In addition, it provides a more realistic channel model for V2V communications, covering ranges of speeds and distances, multipath and sub-path signals, different angles of arrival and departure, and both line-of-sight and non-line-of-sight conditions. An intensive evaluation process has been carried out to validate the library, with acceptable results. An open-access predefined library enables researchers in the relevant communities to test and evaluate complicated vehicular communication scenarios more broadly, with less time and effort.
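The idea of a predefined coefficient library can be sketched as a lookup table keyed on quantized scenario parameters. The key structure, quantization steps, and placeholder coefficients below are hypothetical, since the paper's actual file format is not described here.

```python
import numpy as np

def make_key(n_tx, n_rx, speed_kmh, distance_m, los):
    """Quantize scenario parameters into a library lookup key
    (speed snapped to 10 km/h bins, distance to 50 m bins)."""
    return (n_tx, n_rx,
            round(speed_kmh / 10) * 10,
            round(distance_m / 50) * 50,
            los)

# Hypothetical library: precomputed MIMO channel matrices per scenario.
library = {
    make_key(2, 2, 60, 100, True): np.ones((2, 2), dtype=complex),  # placeholder
}

def channel(n_tx, n_rx, speed_kmh, distance_m, los):
    """Fetch a precomputed channel matrix instead of regenerating it,
    which is where the simulation-time saving comes from."""
    return library[make_key(n_tx, n_rx, speed_kmh, distance_m, los)]

H = channel(2, 2, 62, 110, True)   # snaps to the (60 km/h, 100 m, LOS) entry
print(H.shape)  # (2, 2)
```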
Efficient Parallel Statistical Model Checking of Biochemical Networks
We consider the problem of verifying stochastic models of biochemical
networks against behavioral properties expressed in temporal logic terms.
Exact probabilistic verification approaches, such as CSL/PCTL model checking,
are undermined by a huge computational demand which rules them out for most
real case studies. Less demanding approaches, such as statistical model
checking, estimate the likelihood that a property is satisfied by sampling
executions of the stochastic model. We propose a methodology for efficiently
estimating the likelihood that an LTL property P holds for a stochastic model
of a biochemical network. As with other statistical verification techniques,
the proposed methodology uses a stochastic simulation algorithm to generate
execution samples; however, three key aspects improve its efficiency. First,
sample generation is driven by on-the-fly verification of P, which results in
optimal overall simulation time. Second, the confidence interval estimation
for the probability that P holds is based on an efficient variant of the
Wilson method, which ensures faster convergence. Third, the whole methodology
is designed in a parallel fashion, and a prototype software tool has been
implemented that performs the sampling/verification process in parallel on an
HPC architecture.
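The Wilson interval mentioned above can be computed directly. The sketch below is the standard Wilson score interval for a Bernoulli proportion, not necessarily the paper's specific variant.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a Bernoulli proportion.

    In statistical model checking, this bounds the probability that a
    property holds, given `successes` satisfying traces out of `n`.
    Unlike the normal-approximation interval, it stays inside [0, 1]
    and behaves well for proportions near 0 or 1.
    """
    if n == 0:
        return (0.0, 1.0)
    p_hat = successes / n
    denom = 1 + z * z / n
    centre = (p_hat + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z * z / (4 * n * n))
    return (centre - half, centre + half)

lo, hi = wilson_interval(80, 100)
print(f"[{lo:.3f}, {hi:.3f}]")  # → [0.711, 0.867]
```

A sequential procedure would keep drawing and verifying traces until the width `hi - lo` drops below a target precision.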
Practical issues for the implementation of survivability and recovery techniques in optical networks
The Quest for Scalability and Accuracy in the Simulation of the Internet of Things: an Approach based on Multi-Level Simulation
This paper presents a methodology for simulating the Internet of Things (IoT)
using multi-level simulation models. With respect to conventional simulators,
this approach allows us to tune the level of detail of different parts of the
model without compromising the scalability of the simulation. As a use case, we
have developed a two-level simulator to study the deployment of smart services
over rural territories. The higher level is based on a coarse-grained,
agent-based adaptive parallel and distributed simulator. When needed, this
simulator spawns OMNeT++ model instances to evaluate in more detail the issues
concerned with wireless communications in restricted areas of the simulated
world. The performance evaluation confirms the viability of multi-level
simulations for IoT environments.
Comment: Proceedings of the IEEE/ACM International Symposium on Distributed Simulation and Real Time Applications (DS-RT 2017)
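The two-level idea can be sketched as a coarse agent-based loop that escalates selected regions to a detailed model. `run_detailed_model` below is a hypothetical stand-in for spawning an OMNeT++ instance, and the density-based escalation rule is an illustrative assumption.

```python
def run_detailed_model(region, agents):
    """Stand-in for the fine-grained wireless simulator: here we just
    pretend that detailed modeling halves the delivery estimate."""
    return {"region": region, "delivered": len(agents) // 2}

def coarse_step(regions, density_threshold=10):
    """One step of the coarse (level-1) simulator over all regions."""
    results = []
    for region, agents in regions.items():
        if len(agents) >= density_threshold:
            # High-density region: the coarse model is too inaccurate,
            # so delegate this region to the detailed (level-2) model.
            results.append(run_detailed_model(region, agents))
        else:
            # Low-density region: the coarse model assumes all messages arrive.
            results.append({"region": region, "delivered": len(agents)})
    return results

regions = {"village": list(range(4)), "town": list(range(25))}
print(coarse_step(regions))
# village stays coarse (4 delivered); town escalates (12 delivered)
```

The scalability benefit comes from paying the detailed-simulation cost only in the regions where coarse-grained results would be misleading.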
Quantile-based optimization under uncertainties using adaptive Kriging surrogate models
Uncertainties are inherent to real-world systems. Taking them into account is
crucial in industrial design problems and this might be achieved through
reliability-based design optimization (RBDO) techniques. In this paper, we
propose a quantile-based approach to solve RBDO problems. We first transform
the safety constraints usually formulated as admissible probabilities of
failure into constraints on quantiles of the performance criteria. In this
formulation, the quantile level controls the degree of conservatism of the
design. Starting with the premise that industrial applications often involve
high-fidelity and time-consuming computational models, the proposed approach
makes use of Kriging surrogate models (a.k.a. Gaussian process modeling).
Thanks to the Kriging variance (a measure of the local accuracy of the
surrogate), we derive a procedure with two stages of enrichment of the design
of computer experiments (DoE) used to construct the surrogate model. The first
stage globally reduces the Kriging epistemic uncertainty and adds points in the
vicinity of the limit-state surfaces describing the system performance to be
attained. The second stage locally checks, and if necessary, improves the
accuracy of the quantiles estimated along the optimization iterations.
Applications to three analytical examples and to the optimal design of a car
body subsystem (minimal mass under mechanical safety constraints) show the
accuracy and the remarkable efficiency of the proposed procedure.
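The quantile reformulation of the safety constraint can be illustrated with a Monte Carlo sketch. A plain analytical function stands in for the Kriging surrogate mean, and the limit state `g_hat` and input distribution are illustrative assumptions, not the paper's examples.

```python
import numpy as np

def quantile_constraint(surrogate_mean, sample_inputs, alpha=0.95):
    """Estimate the alpha-quantile of a performance criterion by Monte
    Carlo on a cheap surrogate, as in quantile-based RBDO: the chance
    constraint P[g(X) > 0] <= 1 - alpha is replaced by the equivalent
    quantile constraint Q_alpha(g(X)) <= 0.
    """
    g = surrogate_mean(sample_inputs)
    return np.quantile(g, alpha)

rng = np.random.default_rng(0)
X = rng.normal(loc=1.0, scale=0.2, size=(10_000, 1))  # uncertain input
g_hat = lambda x: x[:, 0] ** 2 - 2.0                  # stand-in for the Kriging mean
q95 = quantile_constraint(g_hat, X)
print(f"95% quantile of g: {q95:.3f}")  # design is feasible iff this is <= 0
```

Since the quantile is evaluated on the surrogate rather than the expensive model, the Monte Carlo sample can be large; the enrichment stages described above then control the surrogate's accuracy where the quantile estimate depends on it.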
Process algebra for performance evaluation
This paper surveys the theoretical developments in the field of stochastic process algebras: process algebras in which action occurrences may be subject to a delay determined by a random variable. A huge class of resource-sharing systems – such as large-scale computers, client–server architectures, and networks – can accurately be described using such stochastic specification formalisms. The main emphasis of this paper is the treatment of operational semantics, notions of equivalence, and (sound and complete) axiomatisations of these equivalences for different types of Markovian process algebras, where delays are governed by exponential distributions. Starting from a simple actionless algebra for describing time-homogeneous continuous-time Markov chains, we consider the integration of actions and random delays both as a single entity (as in well-known Markovian process algebras such as TIPP, PEPA and EMPA) and as separate entities (as in the timed process algebras Timed CSP and TCCS). In total we consider four related calculi and investigate their relationship to existing Markovian process algebras. We also briefly indicate how one can profit from the separation of time and actions when incorporating more general, non-Markovian distributions.
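The exponential-delay semantics underlying Markovian process algebras can be illustrated with a small race simulation: when exponentially delayed actions compete, the winner is chosen with probability proportional to its rate, and the elapsed time is itself exponential with the sum of the rates. The rates below are illustrative.

```python
import random

def race(rates, rng):
    """Resolve a race between competing exponential delays, as in the
    operational semantics of Markovian process algebras: sample each
    delay and return (elapsed time, index of the winning action)."""
    samples = [(rng.expovariate(r), i) for i, r in enumerate(rates)]
    return min(samples)

rng = random.Random(42)
wins = [0, 0]
for _ in range(10_000):
    _, winner = race([2.0, 1.0], rng)
    wins[winner] += 1

# The rate-2 action should win about 2/3 of the races.
print(wins[0] / 10_000)
```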