A Survey of Anticipatory Mobile Networking: Context-Based Classification, Prediction Methodologies, and Optimization Techniques
A growing trend in information technology is to not just react to changes, but to anticipate them as much as possible. This paradigm has made modern solutions, such as recommendation systems, a ubiquitous presence in today's digital transactions. Anticipatory networking extends the idea to communication technologies by studying patterns and periodicity in human behavior and network dynamics to optimize network performance. This survey collects and analyzes recent papers leveraging context information to forecast the evolution of network conditions and, in turn, to improve network performance. In particular, we identify the main prediction and optimization tools adopted in this body of work and link them with the objectives and constraints of the typical applications and scenarios. Finally, we consider open challenges and research directions to make anticipatory networking part of next generation networks.
Prediction-based techniques for the optimization of mobile networks
Mención Internacional en el título de doctor.
Mobile cellular networks are complex systems whose behavior is characterized by the superposition of several random phenomena, most of them related to human activities such as mobility, communications, and network usage. However, when observed in their totality, the many individual components merge into more deterministic patterns, and trends start to become identifiable and predictable.
In this thesis we analyze a recent branch of network optimization, commonly referred to as anticipatory networking, that combines prediction solutions with network optimization schemes. The main intuition behind anticipatory networking is that knowing in advance what is going on in the network can help identify potentially severe problems and mitigate their impact by applying solutions while those problems are still in their initial stages. Conversely, the network forecast might also indicate a future improvement in the overall network condition (e.g., a load reduction or better signal quality reported by users). In such a case, resources can be assigned more sparingly, requiring users to rely on buffered information while waiting for the better condition, when it will be more convenient to grant more resources.
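The idea of granting resources sparingly and relying on buffered data until conditions improve can be sketched as a small decision rule. The following is a minimal illustration only; the function name, the toy channel forecast, and the fixed grant levels are assumptions for exposition, not the thesis's actual scheme.

```python
# Hedged sketch: predictive, buffer-aware resource allocation.
# All names and the toy forecast model are illustrative assumptions.

def allocate(buffer_s, predicted_rates, playback_rate=1.0, horizon=5):
    """Decide how much capacity to grant now, given a channel forecast.

    buffer_s:        seconds of content already buffered at the user.
    predicted_rates: forecast channel quality (0..1) for upcoming slots.
    Returns a grant in [0, 1]: fraction of resources assigned this slot.
    """
    lookahead = predicted_rates[:horizon]
    best_slot = max(range(len(lookahead)), key=lambda t: lookahead[t])
    gap_s = best_slot * playback_rate  # playback consumed until the good slot
    # If better conditions are coming and the buffer can bridge the gap,
    # grant sparingly now and refill cheaply later.
    if buffer_s > gap_s and lookahead[best_slot] > lookahead[0]:
        return 0.2   # minimal grant: rely on buffered information
    return 1.0       # buffer cannot bridge the gap: grant fully now

# toy forecast: poor channel now, much better in 3 slots
print(allocate(buffer_s=5.0, predicted_rates=[0.2, 0.3, 0.4, 0.9, 0.9]))  # 0.2
print(allocate(buffer_s=1.0, predicted_rates=[0.2, 0.3, 0.4, 0.9, 0.9]))  # 1.0
```

The key tradeoff shown here is exactly the one described above: prediction lets the allocator defer spending resources during poor conditions whenever the user's buffer can absorb the wait.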
At the beginning of this thesis we survey the current anticipatory networking panorama and the many prediction and optimization solutions proposed so far. In the main body of the work, we propose our novel solutions to the problem, along with the tools and methodologies we designed to evaluate them and to perform a real-world evaluation of our schemes.
By the end of this work it will be clear not only that anticipatory networking is a very promising theoretical framework, but also that it is feasible and can deliver substantial benefits to current and next-generation mobile networks. In fact, with both our theoretical and practical results we show evidence that more than one third of the resources can be saved, and even larger gains can be achieved for data-rate enhancements.
Programa Oficial de Doctorado en Ingeniería Telemática. Presidente: Albert Banchs Roca. Presidente: Pablo Serrano Yañez-Mingot. Secretario: Jorge Ortín Gracia. Vocal: Guevara Noubi
Data-Driven Dynamic Robust Resource Allocation: Application to Efficient Transportation
The transformation to smarter cities brings an array of emerging urbanization challenges. With the development of technologies such as sensor networks, storage devices, and cloud computing, we are able to collect, store, and analyze large amounts of data in real time. Modern cities have brought to life unprecedented opportunities and challenges for allocating limited resources in a data-driven way. Intelligent transportation is one emerging research area, in which sensing data provides opportunities for understanding spatial-temporal patterns of human demand and mobility. However, greedy or matching algorithms that only deal with known requests are far from efficient in the long run if they do not consider demand information predicted from data.
In this dissertation, we develop a data-driven robust resource allocation framework that considers spatial-temporally correlated demand and demand uncertainties, motivated by the problem of efficiently dispatching taxis or autonomous vehicles. We first present a receding horizon control (RHC) framework to dispatch taxis towards predicted demand; this framework incorporates both information from historical record data and real-time GPS location and occupancy status data. It also allows us to allocate resources from a globally optimal perspective over a longer time period, beyond the local-level greedy or matching algorithms that assign a passenger pick-up location to each vacant vehicle. The objectives include reducing both current and anticipated future total idle driving distance and matching the spatial-temporal ratio between demand and supply for service quality. We then present a robust optimization method to handle spatial-temporally correlated demand model uncertainties that can be expressed in closed convex sets. Uncertainty sets of demand vectors are constructed from data based on theories in hypothesis testing, and the sets provide a desired probabilistic guarantee level for the performance of dispatch solutions. To minimize the average resource allocation cost under demand uncertainties, we develop a general data-driven dynamic distributionally robust resource allocation model. An efficient algorithm for building demand uncertainty sets that are compatible with various demand prediction methods is developed. We prove equivalent computationally tractable forms of the robust and distributionally robust resource allocation problems using strong duality. The resource allocation problem aims to balance the demand-supply ratio at different nodes of the network with minimum balancing and re-balancing cost, with decision variables in the denominator, a setting not covered by previous work.
Trace-driven analysis with real taxi operational record data from San Francisco shows that the RHC framework reduces the average total idle distance of taxis by 52%, and evaluations with over 100 GB of New York City taxi trip data show that the robust and distributionally robust dispatch methods reduce the average total idle distance by a further 10% compared with non-robust solutions. Besides increasing service efficiency by reducing total idle driving distance, the resource allocation methods in this dissertation also reduce the demand-supply ratio mismatch error across the city.
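The receding-horizon idea above (re-plan against predicted demand, apply the first decision, roll forward) can be illustrated with one planning step that rebalances vacant vehicles toward zones with high predicted demand. This is a toy sketch under stated assumptions; the zone names, the proportional target rule, and the greedy transfer loop are illustrative, not the dissertation's actual optimization formulation.

```python
# Hedged sketch: one rebalancing step of a receding-horizon dispatch loop.
# The proportional-share target and greedy matching are assumptions.

def rebalance_step(supply, predicted_demand):
    """Move vacant vehicles so supply tracks the predicted demand share.

    supply:           dict zone -> vacant vehicles currently there.
    predicted_demand: dict zone -> forecast requests over the next horizon.
    Returns a list of (from_zone, to_zone, n_vehicles) transfers.
    """
    total_s = sum(supply.values())
    total_d = sum(predicted_demand.values())
    # Target supply at each zone, proportional to its predicted demand share.
    target = {z: round(total_s * predicted_demand[z] / total_d) for z in supply}
    surplus = {z: supply[z] - target[z] for z in supply}
    moves = []
    donors = [z for z in supply if surplus[z] > 0]
    takers = [z for z in supply if surplus[z] < 0]
    for d in donors:              # greedily ship surplus to deficit zones
        for t in takers:
            n = min(surplus[d], -surplus[t])
            if n > 0:
                moves.append((d, t, n))
                surplus[d] -= n
                surplus[t] += n
    return moves

# Zone A is oversupplied relative to its predicted demand share.
print(rebalance_step({"A": 6, "B": 2, "C": 2}, {"A": 10, "B": 40, "C": 50}))
# [('A', 'B', 2), ('A', 'C', 3)]
```

In a full RHC loop, only these first-step transfers would be executed before demand is re-predicted and the plan recomputed; the robust variants described above would additionally hedge the `predicted_demand` input over an uncertainty set.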
LAPRA: Location-aware Proactive Resource Allocation
Today's indoor wireless networks employ reactive resource allocation methods to provide fair and efficient usage of the communication system. However, their reactive nature limits the quality of service (QoS) that can be offered at user locations within the environment. In large crowded areas (e.g., airports, conferences), networks can get congested and users may suffer from poor QoS. To mitigate this, we propose and evaluate a location-aware, user-centric proactive resource allocation approach (LAPRA), in which users proactively seek good channel quality by moving to locations where the signal quality is good. As a result, the users and their locations are optimized to improve the overall QoS. We demonstrate that the proposed proactive approach enhances user QoS and improves the network throughput of the system.
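The user-centric step at the heart of this approach (each user evaluates nearby spots and relocates to the one with the best signal) can be sketched as follows. The grid radio map, the distance bound, and the function name are illustrative assumptions, not LAPRA's actual model.

```python
# Hedged sketch: a user picks the reachable location with the best signal.
# The radio map (location -> RSSI in dBm) and walking range are assumptions.

def best_spot(current, candidates, signal_map, max_dist=2.0):
    """Return the candidate location with the strongest signal in range."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    reachable = [c for c in candidates if dist(current, c) <= max_dist]
    # Fall back to staying put if nothing is within walking range.
    return max(reachable, key=lambda c: signal_map[c], default=current)

signal_map = {(0, 0): -90, (1, 0): -70, (0, 1): -60, (5, 5): -40}
print(best_spot((0, 0), list(signal_map), signal_map))  # (0, 1)
```

The (5, 5) spot has the best signal overall but is out of walking range, so the user moves to (0, 1); a network-level scheme would coordinate such moves across users to balance load.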
Learning-aided Stochastic Network Optimization with Imperfect State Prediction
We investigate the problem of stochastic network optimization in the presence of imperfect state prediction and non-stationarity. Based on a novel distribution-accuracy curve prediction model, we develop the predictive learning-aided control (PLC) algorithm, which jointly utilizes historic and predicted network state information for decision making. PLC is an online algorithm that requires zero a priori system statistical information, and consists of three key components, namely sequential distribution estimation and change detection, dual learning, and online queue-based control.
Specifically, we show that PLC simultaneously achieves good long-term performance, short-term queue size reduction, accurate change detection, and fast algorithm convergence. In particular, for stationary networks, PLC achieves a near-optimal utility-delay tradeoff. For non-stationary networks, PLC obtains a utility-backlog tradeoff for distributions that persist sufficiently long, with bounds expressed in terms of the prediction accuracy and a model constant (the Backpressure algorithm of Neely requires a longer interval for the same utility performance, with a larger backlog). Moreover, PLC detects distribution changes faster with high probability, by a margin that grows with the prediction window size, and achieves fast convergence. Our results demonstrate that state prediction (even imperfect) can help (i) achieve faster detection and convergence, and (ii) obtain better utility-delay tradeoffs.
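The "online queue-based control" component named above is, in the standard Lyapunov optimization setting, a drift-plus-penalty decision rule: at each slot, pick the action minimizing cost scaled by a parameter V minus queue-weighted service. The sketch below shows only this generic step (PLC adds the distribution estimation, change detection, and dual learning on top); the parameter V and the toy action set are assumptions for illustration.

```python
# Hedged sketch: a generic drift-plus-penalty step of queue-based control.
# V and the (service_rate, cost) action pairs are illustrative assumptions.

def dpp_action(queue, actions, V=10.0):
    """Pick the action minimizing V*cost - queue*service_rate.

    queue:   current backlog Q(t).
    actions: list of (service_rate, cost) options.
    V:       utility-delay tradeoff knob; larger V favors low cost
             at the price of larger queues.
    """
    return min(actions, key=lambda a: V * a[1] - queue * a[0])

# Small backlog: the cheap idle action wins; large backlog: serve at full rate.
print(dpp_action(queue=1.0,  actions=[(0.0, 0.0), (1.0, 1.0)]))   # (0.0, 0.0)
print(dpp_action(queue=50.0, actions=[(0.0, 0.0), (1.0, 1.0)]))   # (1.0, 1.0)
```

The V knob is exactly the lever behind the utility-delay tradeoffs discussed in the abstract: the queues themselves encode how urgently service is needed, with no prior statistics required.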
Final report on the evaluation of RRM/CRRM algorithms
Public deliverable of the EVEREST project. This deliverable provides a definition and a complete evaluation of the RRM/CRRM algorithms selected in D11 and D15, and evolved and refined through an iterative process. The evaluation is carried out by means of simulations using the simulators provided in D07 and D14.