Performance analysis of downlink shared channels in a UMTS network
In light of the expected growth in wireless data communications and the commonly anticipated uplink/downlink asymmetry, we present a performance analysis of downlink data transfer over Downlink Shared Channels (DSCHs), arguably the most efficient UMTS transport channel for medium-to-large data transfers. Our objective is to provide qualitative insight into the different aspects that influence the data Quality of Service (QoS). Most fundamentally, the data traffic load affects the data QoS in two distinct ways: (i) a heavier data traffic load implies greater competition for DSCH resources and thus longer transfer delays; and (ii) since each data call served on a DSCH must maintain an Associated Dedicated Channel (A-DCH) for signalling purposes, a heavier data traffic load implies a higher interference level, a higher frame error rate and thus a lower effective aggregate DSCH throughput: the greater the demand for service, the smaller the aggregate service capacity. The latter effect is further amplified in a multicellular scenario, where a DSCH experiences additional interference from the DSCHs and A-DCHs in surrounding cells, causing a further degradation of its effective throughput. Following a two-stage performance evaluation approach that segregates the interference aspects from the traffic dynamics, we execute a set of numerical experiments to demonstrate these effects and to obtain qualitative insight into the impact of various system aspects on the data QoS.
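The load-dependent capacity effect described in this abstract can be illustrated with a toy numerical model. The sketch below is not the paper's model: the peak rate, the linear interference weight and the frame-error-rate curve (parameters peak_rate, gamma, beta) are all invented for illustration, and only the qualitative shape (more calls, more A-DCH interference, lower effective capacity) reflects the stated effect.

```python
# Illustrative sketch only: a toy model of the load-dependent DSCH capacity
# effect. All parameters (peak_rate, gamma, beta) are hypothetical and are
# not taken from the paper.

def effective_dsch_throughput(n_calls: int,
                              peak_rate: float = 384.0,  # kbit/s, assumed peak DSCH rate
                              gamma: float = 0.004,      # assumed per-A-DCH interference weight
                              beta: float = 2.0) -> float:
    """Effective aggregate DSCH throughput for n active data calls.

    Each call keeps an Associated DCH (A-DCH) whose signalling raises the
    cell's interference level; a higher interference level raises the frame
    error rate (FER), so the useful (error-free) throughput shrinks.
    """
    interference = gamma * n_calls                      # toy linear interference model
    fer = 1.0 - (1.0 / (1.0 + interference)) ** beta    # toy FER curve in [0, 1)
    return peak_rate * (1.0 - fer)                      # error-free share of the peak rate

if __name__ == "__main__":
    for n in (1, 5, 10, 20, 40):
        print(f"{n:2d} calls -> effective capacity {effective_dsch_throughput(n):6.1f} kbit/s")
```

Running the loop shows the aggregate service capacity falling monotonically as the number of served calls grows, which is exactly the "greater demand, smaller capacity" feedback the abstract describes.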
Next Generation Cloud Computing: New Trends and Research Directions
The landscape of cloud computing has changed significantly over the last decade. Not only have more providers and service offerings crowded the space, but cloud infrastructure that was traditionally limited to single-provider data centers is now also evolving. In this paper, we first discuss the changing cloud infrastructure and consider the use of infrastructure from multiple providers and the benefits of decentralising computing away from data centers. These trends have created the need for a variety of new computing architectures that future cloud infrastructure will offer. These architectures are anticipated to impact areas such as connecting people and devices, data-intensive computing, the service space and self-learning systems. Finally, we lay out a roadmap of challenges that will need to be addressed to realise the potential of next generation cloud systems.

Comment: Accepted to Future Generation Computer Systems, 07 September 201
Will SDN be part of 5G?
For many, this is no longer a valid question and the case is considered settled, with SDN/NFV (Software Defined Networking/Network Function Virtualization) providing the innovation enablers that solve many outstanding management issues in 5G. However, given the monumental task of softwarizing the radio access network (RAN) while 5G is just around the corner, and with some companies having already unveiled their 5G equipment, there is a realistic concern that we may only see point solutions involving SDN technology rather than a fully SDN-enabled RAN. This survey paper identifies the important obstacles in the way and reviews the state of the art of the relevant solutions. It differs from previous surveys on SDN-based RAN in that it focuses on the salient problems and discusses solutions proposed both within and outside the SDN literature. Our main focus is on the fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, the latency of general-purpose processors (GPPs), and the additional security vulnerabilities that softwarization brings to the RAN. We also provide a summary of the architectural developments in the SDN-based RAN landscape, as not all work can be covered under the focused issues. This paper provides a comprehensive survey of the state of the art of SDN-based RAN and clearly points out the gaps in the technology.

Comment: 33 pages, 10 figures
Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks
Future wireless networks hold substantial potential for supporting a broad range of complex, compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, owing to the complex, heterogeneous nature of the network structures and wireless services. Machine learning (ML) algorithms have achieved great success in supporting big data analytics, efficient parameter estimation and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radio (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services as well as scenarios of future wireless networks.

Comment: 46 pages, 22 figures
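As a concrete illustration of one paradigm this survey covers, the sketch below applies a minimal epsilon-greedy reinforcement-learning loop to wireless channel selection, in the spirit of the cognitive-radio applications mentioned in the abstract. It is not taken from the article: the channel success probabilities and all parameters are hypothetical, and the learner is a simple bandit-style agent rather than any specific algorithm surveyed there.

```python
# Illustrative sketch only: a minimal epsilon-greedy learner choosing among
# wireless channels. Channel success probabilities are invented.

import random

CHANNEL_SUCCESS_PROB = [0.2, 0.5, 0.8]   # hypothetical per-channel link quality
EPSILON = 0.1                            # exploration rate
N_ROUNDS = 10_000

q = [0.0] * len(CHANNEL_SUCCESS_PROB)    # running estimate of each channel's reward
counts = [0] * len(CHANNEL_SUCCESS_PROB)

for _ in range(N_ROUNDS):
    # Explore a random channel with probability EPSILON, otherwise exploit
    # the channel with the best estimate so far.
    if random.random() < EPSILON:
        ch = random.randrange(len(q))
    else:
        ch = max(range(len(q)), key=lambda i: q[i])
    reward = 1.0 if random.random() < CHANNEL_SUCCESS_PROB[ch] else 0.0
    counts[ch] += 1
    q[ch] += (reward - q[ch]) / counts[ch]   # incremental mean update

print("estimated channel qualities:", [round(v, 3) for v in q])
```

After enough rounds the estimates converge toward the true success probabilities and the agent concentrates its transmissions on the best channel, which is the essential learn-by-interaction pattern the survey attributes to reinforcement learning in wireless settings.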