Statistical Delay Bound for WirelessHART Networks
In this paper we provide a performance analysis framework for wireless
industrial networks by deriving a service curve and a bound on the delay
violation probability. For this purpose we use the (min,×) stochastic network
calculus as well as a recently presented recursive formula for an end-to-end
delay bound of wireless heterogeneous networks. The derived results are mapped
to WirelessHART networks used in process automation and are validated via
simulations. In addition to WirelessHART, our results can be applied to any
wireless network whose physical layer conforms to the IEEE 802.15.4 standard
and whose MAC protocol incorporates TDMA and channel hopping, e.g.,
ISA100.11a or TSCH-based networks. The provided delay analysis is especially
useful during the network design phase, offering further research potential
towards optimal routing and power management in QoS-constrained wireless
industrial networks.
Comment: Accepted at PE-WASUN 201
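The delay violation probability such bounds target can be illustrated with a minimal Monte Carlo sketch, not the paper's (min,×) calculus derivation: a slotted FIFO queue where each TDMA slot delivers the head-of-line packet with some success probability (channel hopping abstracted into an i.i.d. per-slot success). All parameter values below are illustrative assumptions.

```python
import random

def delay_violation_prob(p_arrival, p_success, target_delay,
                         n_slots=200_000, seed=1):
    """Estimate P(delay > target) for a slotted FIFO queue: at most one
    packet arrives per slot (prob. p_arrival); the head-of-line packet
    departs in a slot with prob. p_success (abstracted wireless link)."""
    random.seed(seed)
    queue = []            # arrival slot of each waiting packet
    delays = []
    for t in range(n_slots):
        if random.random() < p_arrival:
            queue.append(t)
        if queue and random.random() < p_success:
            delays.append(t - queue.pop(0) + 1)
    return sum(d > target_delay for d in delays) / len(delays)

# stable regime (utilization 0.5): the tail probability decays with the target
print(delay_violation_prob(0.3, 0.6, target_delay=10))
```

A stochastic network calculus bound would upper-bound exactly this tail, trading a small gap for a closed form usable at design time.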
Low-Latency Millimeter-Wave Communications: Traffic Dispersion or Network Densification?
This paper investigates two strategies to reduce the communication delay in
future wireless networks: traffic dispersion and network densification. A
hybrid scheme that combines these two strategies is also considered. The
probabilistic delay and effective capacity are used to evaluate performance.
For the probabilistic delay, the delay violation probability, i.e., the
probability that the delay exceeds a given tolerance level, is characterized in
terms of upper bounds, which are derived by applying stochastic network
calculus theory. In addition, to characterize the maximum affordable arrival
traffic for mmWave systems, the effective capacity, i.e., the service
capability with a given quality-of-service (QoS) requirement, is studied. The
derived bounds on the probabilistic delay and effective capacity are validated
through simulations. These numerical results show that, for a given average
system gain, traffic dispersion, network densification, and the hybrid scheme
exhibit different potentials to reduce the end-to-end communication delay. For
instance, traffic dispersion outperforms network densification given high
average system gain and arrival rate, but can be the worst option otherwise.
Furthermore, it is revealed that increasing the number of independent paths
and/or the relay density is always beneficial, while the performance gain
depends jointly on the arrival rate and the average system gain. Therefore, a
proper transmission scheme should be selected to optimize the delay
performance according to the given conditions on the arrival traffic and the
system service capability.
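The effective capacity used here has the standard definition EC(θ) = −(1/θ) ln E[e^{−θS}] for an i.i.d. per-slot service process S, where θ is the QoS exponent. Below is a small Monte Carlo sketch under an assumed Rayleigh-fading service model (an illustration, not the paper's mmWave channel model).

```python
import math
import random

def effective_capacity(theta, snr, n=100_000, seed=7):
    """EC(theta) = -(1/theta) * ln E[exp(-theta * S)] for i.i.d. per-slot
    service S = log2(1 + snr*|h|^2) with Rayleigh fading (|h|^2 ~ Exp(1)).
    theta is the QoS exponent: larger theta = stricter delay requirement."""
    random.seed(seed)
    acc = 0.0
    for _ in range(n):
        g = random.expovariate(1.0)      # fading power gain |h|^2
        s = math.log2(1.0 + snr * g)     # service in bits/s/Hz this slot
        acc += math.exp(-theta * s)
    return -math.log(acc / n) / theta

# effective capacity shrinks as the QoS requirement tightens
print(effective_capacity(0.1, snr=10.0), effective_capacity(2.0, snr=10.0))
```

As θ → 0 this recovers the ergodic capacity; as θ grows it approaches the worst-case service rate, which is why it captures the maximum affordable arrival traffic under a delay constraint.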
On the Reliability of LTE Random Access: Performance Bounds for Machine-to-Machine Burst Resolution Time
The Random Access Channel (RACH) has been identified as one of the major
bottlenecks for accommodating a massive number of machine-to-machine (M2M)
users in LTE networks, especially in the case of burst arrivals of connection
requests. As a consequence, the burst resolution problem has sparked a large
number of works in the area, analyzing and optimizing the average performance
of RACH. However, an understanding of the probabilistic performance limits of
RACH is still missing. To address this limitation, in this paper we
investigate the reliability of RACH with access class barring (ACB). We model
RACH as a queuing system, and apply stochastic network calculus to derive
probabilistic performance bounds for the burst resolution time, i.e., the
worst-case time it takes to connect a burst of M2M devices to the base station. We
illustrate the accuracy of the proposed methodology and its potential
applications in performance assessment and system dimensioning.
Comment: Presented at IEEE International Conference on Communications (ICC),
201
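The burst resolution time the paper bounds can be illustrated with a toy contention simulation (not the paper's queuing/stochastic-network-calculus model): each slot, backlogged devices pass an ACB check with a barring probability and pick one preamble uniformly at random; a preamble carries a successful access only if exactly one device chose it. The parameter values (54 preambles, barring factor 0.5) are illustrative assumptions.

```python
import random

def burst_resolution_time(n_devices=100, n_preambles=54, p_acb=0.5, seed=3):
    """Number of RACH slots until every device in the burst has connected.
    Each slot: a device passes ACB with prob. p_acb, then picks a preamble
    uniformly; exactly-one-chooser preambles succeed, the rest collide."""
    random.seed(seed)
    backlog, slots = n_devices, 0
    while backlog > 0:
        slots += 1
        picks = {}
        for _ in range(backlog):
            if random.random() < p_acb:
                pre = random.randrange(n_preambles)
                picks[pre] = picks.get(pre, 0) + 1
        backlog -= sum(1 for c in picks.values() if c == 1)
    return slots

print(burst_resolution_time())
```

Running this over many seeds gives the empirical tail of the burst resolution time, which is the quantity a probabilistic performance bound would dominate.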
Network on Chip: a New Approach of QoS Metric Modeling Based on Calculus Theory
A NoC is composed of IP cores (Intellectual Property) and switches connected
to one another by communication channels. Communication is accomplished by
the exchange of data among IP cores. Often, the structure of particular
messages is not adequate for communication purposes, which leads to the
concept of packet switching. In the context of NoCs, packets are composed of
a header, a payload, and a trailer, and are divided into small pieces called
flits. To meet the required performance, the NoC hardware resources should be
specified at an early step of the system design. Particular attention should
be given to the choice of network parameters such as the physical buffer size
in each node. The End-to-End Delay (EED) and packet loss are among the
critical QoS metrics. Some real-time and multimedia applications place bounds
on these parameters and require specific hardware resources and particular
management approaches in the NoC switch. A traffic contract (SLA, Service
Level Agreement) specifies the ability of a network or protocol to give
guaranteed performance, throughput, or latency bounds based on mutually
agreed measures, usually by prioritizing traffic. A defined Quality of
Service (QoS) may be required for some types of real-time network traffic or
multimedia applications. The main goal of this paper is to define a QoS
metric using the Network on Chip modeling architecture. We focus on the
network delay bound and packet losses. The approach is based on Network
Calculus theory, a mathematical model that represents the behavior of data
flows between IPs interconnected over the NoC. We propose a QoS metric based
on QoS-parameter prioritization factors for multi-application services using
the calculus model.
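The flavour of Network Calculus result such an approach builds on can be shown with the textbook deterministic bound: a flow with token-bucket arrival curve α(t) = σ + ρt crossing a rate-latency server β(t) = R·max(t − T, 0) suffers delay at most T + σ/R. A sketch with illustrative NoC-style numbers (the flit and cycle values are assumptions, not taken from the paper):

```python
def nc_delay_bound(sigma, rho, R, T):
    """Deterministic network-calculus delay bound for a flow with
    token-bucket arrival curve alpha(t) = sigma + rho*t served by a
    rate-latency node beta(t) = R*max(t - T, 0); requires rho <= R."""
    if rho > R:
        raise ValueError("unstable: arrival rate exceeds service rate")
    return T + sigma / R

# burst of 16 flits, sustained 2 flits/cycle, switch serving 4 flits/cycle
# after a 3-cycle latency -> worst-case delay 3 + 16/4 = 7 cycles
print(nc_delay_bound(sigma=16, rho=2, R=4, T=3))  # → 7.0
```

The same curves also bound the backlog (by σ + ρT), which is what drives the physical buffer sizing mentioned above.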
Backlog and Delay Reasoning in HARQ Systems
Recently, hybrid automatic repeat request (HARQ) systems have been favored in
state-of-the-art communication systems since they combine the practicality of
error detection and correction with repeat requests issued by receivers when
needed. The queueing characteristics of these systems have attracted
considerable attention since current technology demands data transmission
with minimum delay provisioning. In this paper, we investigate
the effects of physical layer characteristics on data link layer performance in
a general class of HARQ systems. Constructing a state transition model that
combines queue activity at a transmitter and decoding efficiency at a receiver,
we identify the probability of clearing the queue at the transmitter and the
packet-loss probability at the receiver. We determine the effective capacity
that yields the maximum feasible data arrival rate at the queue under
quality-of-service constraints. In addition, we put forward non-asymptotic
backlog and delay bounds. Finally, considering three different HARQ
protocols, namely Type-I HARQ, HARQ with chase combining (HARQ-CC), and HARQ
with incremental redundancy (HARQ-IR), we show the superiority of HARQ-IR in
delay robustness over the others. However, we further observe that the
performance gap between HARQ-CC and HARQ-IR is negligible in certain cases.
The novelty of our paper is a general cross-layer analysis of these systems,
considering encoding/decoding in the physical layer and delay aspects in the
data-link layer.
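The three protocols can be contrasted with a simple information-theoretic toy model (an assumption for illustration, not the paper's state-transition model): per-attempt Rayleigh-faded SNRs, where Type-I decodes from the current attempt alone, HARQ-CC accumulates SNR before decoding, and HARQ-IR accumulates mutual information.

```python
import math
import random

def mean_attempts(protocol, rate=2.0, snr=3.0, n=20_000, max_tx=20, seed=5):
    """Average transmissions until decoding for three HARQ flavours over
    Rayleigh fading (per-attempt SNR ~ Exp(mean=snr)):
      type-i : decode iff the *current* attempt alone carries the rate
      cc     : chase combining, SNRs add before decoding
      ir     : incremental redundancy, mutual information adds up"""
    random.seed(seed)
    total = 0
    for _ in range(n):
        acc_snr = acc_mi = 0.0
        for k in range(1, max_tx + 1):
            g = snr * random.expovariate(1.0)
            acc_snr += g
            acc_mi += math.log2(1.0 + g)
            ok = {"type-i": math.log2(1.0 + g) >= rate,
                  "cc": math.log2(1.0 + acc_snr) >= rate,
                  "ir": acc_mi >= rate}[protocol]
            if ok:
                break
        total += k
    return total / n

for p in ("type-i", "cc", "ir"):
    print(p, mean_attempts(p))
```

Since the accumulated mutual information always dominates the chase-combined one, IR needs the fewest attempts on average, matching the delay-robustness ordering reported above; the CC/IR gap shrinks at high SNR.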
Service discovery and negotiation with COWS
To provide formal foundations to current (web) services technologies, we put forward using COWS, a process calculus for specifying, combining and analysing services, as a uniform formalism for modelling all the relevant phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, deployment and execution. In this paper, we show that constraints and operations on them can be smoothly incorporated in COWS, and propose a disciplined way to model multisets of constraints and to manipulate them through appropriate interaction protocols. We thereby demonstrate that QoS requirement specifications and SLA achievements, as well as the phases of dynamic service discovery and negotiation, can also be comfortably modelled in COWS. We illustrate our approach through a scenario for a service-based web hosting provider.
How user throughput depends on the traffic demand in large cellular networks
Little's law allows us to express the mean user throughput in any region of the
network as the ratio of the mean traffic demand to the steady-state mean number
of users in this region. Corresponding statistics are usually collected in
operational networks for each cell. Using ergodic arguments and Palm theoretic
formalism, we show that the global mean user throughput in the network is equal
to the ratio of these two means in the steady state of the "typical cell".
Here, both means account for double averaging: over time and network geometry,
and can be related to the per-surface traffic demand, base-station density and
the spatial distribution of the SINR. The latter accounts for network
irregularities, shadowing, and idling cells via cell-load equations. We
validate our approach by comparing analytical and simulation results for a
Poisson network model with real-network cell measurements.
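The ratio-of-means relation can be sanity-checked on the simplest closed-form case, a single M/G/1 processor-sharing "cell" (a textbook simplification, not the paper's Palm-calculus setting): mean user throughput = mean traffic demand / mean number of users = C(1 − ρ).

```python
def mean_user_throughput(lam, file_size, capacity):
    """Little's-law throughput in a single M/G/1 processor-sharing 'cell':
    users arrive at rate lam (1/s), each downloads file_size bits, and the
    cell serves capacity bits/s shared equally among active users.
    Mean throughput = traffic demand / mean number of users = C*(1 - rho)."""
    demand = lam * file_size                   # offered traffic, bits/s
    rho = demand / capacity
    if rho >= 1:
        return 0.0                             # overloaded: throughput collapses
    mean_users = rho / (1.0 - rho)             # E[N] for M/G/1-PS
    return demand / mean_users                 # equals capacity * (1 - rho)

# 10 Mb files arriving at 0.6/s over a 10 Mb/s cell: rho = 0.6,
# E[N] = 1.5 users, so mean user throughput is capacity*(1-rho) = 4 Mb/s
print(mean_user_throughput(0.6, 10e6, 10e6))
```

The paper's contribution is showing that the same ratio, doubly averaged over time and network geometry, remains valid for the "typical cell" of an irregular network with shadowing and cell-load coupling.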