Cooperative Uplink Inter-Cell Interference (ICI) Mitigation in 5G Networks
To support the paradigm shift toward fifth generation (5G) mobile communication, network architectures, associated technologies and network operation algorithms that differ radically from existing fourth generation (4G) cellular solutions need to be developed. The evolution toward 5G mobile networks will be characterized by an increasing number of wireless devices, increasing device and service complexity, and the requirement to access mobile services ubiquitously.
To realise the dramatic increase in data rates in particular, research is focused on improving the capacity of current Long Term Evolution (LTE)-based 4G network standards before radical changes, which could include acquiring additional spectrum, are exploited. The LTE network has a frequency reuse factor of one, so neighbouring cells/sectors use the same spectrum, making cell-edge users vulnerable to heavy inter-cell interference in addition to other impairments such as fading and path loss. Accordingly, this thesis focuses on improving the performance of cell-edge users in LTE and LTE-Advanced networks, initially by implementing a new Coordinated Multi-Point (CoMP) technique that uses smart antennas to mitigate cell-edge user interference in the uplink of future 5G networks. Next, a novel cooperative uplink inter-cell interference mitigation algorithm based on joint reception at the base station using receiver adaptive beamforming is investigated. Subsequently, interference mitigation for Device-to-Device (D2D) communication underlaying the cellular network in a heterogeneous environment is investigated as an enabling technology for maximising resource block (RB) utilisation in emerging 5G networks. Exploiting the proximity of users in a network, higher data rates with maximum RB utilisation (as the technology reuses the cellular RBs simultaneously) are achieved while taking some load off the evolved Node B (eNodeB) through direct communication between User Equipment (UE). Simulation results show that the proximity and transmission power of D2D transmission yield high performance gains for D2D receivers, demonstrated to be better than those of cellular UEs with better channel conditions or in close proximity to the eNodeB.
Finally, it is demonstrated that, as an extension of the above, applying a novel receiver beamforming technique to reduce interference from D2D users can further enhance network performance.
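The proximity gain reported above can be illustrated with a one-line link-budget comparison; the log-distance path-loss exponent and the transmit powers below are illustrative assumptions, not the thesis's simulation parameters.

```python
import math

def path_loss_db(d_m, pl0_db=40.0, d0_m=1.0, n=3.5):
    """Log-distance path loss with illustrative reference loss and exponent."""
    return pl0_db + 10 * n * math.log10(d_m / d0_m)

def rx_power_dbm(tx_dbm, d_m):
    """Received power: transmit power minus path loss."""
    return tx_dbm - path_loss_db(d_m)

# A D2D pair 20 m apart at low transmit power vs. a cellular UE 300 m from the eNodeB
d2d_rx = rx_power_dbm(10.0, 20.0)    # 10 dBm D2D transmitter
cell_rx = rx_power_dbm(23.0, 300.0)  # 23 dBm cellular UE
print(f"D2D link:      {d2d_rx:.1f} dBm")
print(f"Cellular link: {cell_rx:.1f} dBm")
```

Even with 13 dB less transmit power, the short D2D hop arrives tens of dB stronger than the cellular uplink, which is the effect the simulation results quantify.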
To develop the aforementioned technologies and evaluate the performance of new algorithms in emerging network scenarios, a beyond-state-of-the-art LTE system-level simulator (SLS) was implemented. The new simulator includes Multiple-Input Multiple-Output (MIMO) antenna functionalities, comprehensive channel models (such as Wireless World Initiative New Radio II, i.e. WINNER II) and adaptive modulation and coding schemes to accurately emulate the LTE and LTE-A network standards.
Tractable Resource Management with Uplink Decoupled Millimeter-Wave Overlay in Ultra-Dense Cellular Networks
The forthcoming 5G cellular network is expected to overlay millimeter-wave (mmW) transmissions on the incumbent micro-wave (µW) architecture; the overall mmW and µW resource management should therefore be harmonized. This paper aims at maximizing the overall downlink (DL) rate under a minimum uplink (UL) rate constraint, and concludes that, under time-division duplex (TDD) mmW operation, mmW tends to focus on DL transmissions while µW has high priority for complementing the UL. Such UL dedication of µW results from the limited use of mmW UL bandwidth due to excessive power consumption and/or the high peak-to-average power ratio (PAPR) at mobile users. To further relieve this UL bottleneck, we propose mmW UL decoupling, which allows each legacy µW base station (BS) to receive mmW signals. Its impact on mm-µW resource management is provided in a tractable way by virtue of a novel closed-form mm-µW spectral efficiency (SE) derivation. In an ultra-dense cellular network (UDN), our derivation verifies that the mmW (or µW) SE is a logarithmic function of the BS-to-user density ratio. This strikingly simple yet practically valid analysis is enabled by exploiting stochastic geometry in conjunction with real three-dimensional (3D) building blockage statistics from Seoul, Korea.
Comment: to appear in IEEE Transactions on Wireless Communications (17 pages, 11 figures, 1 table)
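The headline result above, SE as a logarithmic function of the BS-to-user density ratio, can be stated schematically as follows; this is only an illustrative form, with $a$ and $b$ as placeholder constants rather than the paper's derived coefficients:

```latex
\mathrm{SE} \;\approx\; a\,\log\!\left(1 + b\,\frac{\lambda_{\mathrm{BS}}}{\lambda_{\mathrm{U}}}\right),
```

where $\lambda_{\mathrm{BS}}$ and $\lambda_{\mathrm{U}}$ denote the BS and user densities.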
Investigations of 5G localization with positioning reference signals
Time Difference of Arrival (TDOA) is a user-assisted or network-assisted technique in which the user equipment measures the times of arrival of precise positioning reference signals (PRS) transmitted by base stations and reports the measured time-of-arrival estimates to the location server. Using multilateration based on the TDOA measurements of the PRS received from at least three base stations, together with the known locations of those base stations, the location server determines the position of the user equipment.
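The multilateration step described above can be sketched numerically. The following is a minimal illustration, not the thesis's implementation: the base-station coordinates, the noise-free measurements and the use of SciPy's generic least-squares solver are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light (m/s)

def tdoa_residuals(pos, bs, tdoa):
    """Range-difference residuals with respect to the reference BS bs[0]."""
    d = np.linalg.norm(bs - pos, axis=1)
    return (d[1:] - d[0]) - C * tdoa

def locate(bs, tdoa, guess):
    """Least-squares TDOA multilateration from an initial position guess."""
    return least_squares(tdoa_residuals, guess, args=(bs, tdoa)).x

# Four base stations at known positions; the UE is at (120, 80)
bs = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])
ue = np.array([120.0, 80.0])
d = np.linalg.norm(bs - ue, axis=1)
tdoa = (d[1:] - d[0]) / C  # ideal (noise-free) TDOA measurements
est = locate(bs, tdoa, guess=np.array([250.0, 250.0]))
```

In practice each measured time of arrival carries noise whose variance shrinks with the PRS bandwidth and sample rate, which is exactly the sensitivity the thesis studies.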
Several factors affect the positioning accuracy of the TDOA method, such as the sample rate, the bandwidth, the network deployment, the properties of the PRS and the signal propagation conditions. A positioning accuracy of about 50 meters is adequate for 4G/LTE users, whereas 5G requires sub-meter accuracy for both outdoor and indoor users. Noteworthy improvements in positioning accuracy can be achieved by redesigning the PRS in 5G.
Localization accuracy has been studied for different sampling rates and different algorithms, but the impact of the sample rate and bandwidth of the 5G positioning reference signal (PRS) on high-accuracy TDOA has not yet been considered. The key goal of this thesis is to compare and assess the impact of different sampling rates and different bandwidths of the PRS on 5G positioning accuracy.
Analysing and comparing variable PRS bandwidths in resource blocks shows a meaningful decrease in the RMSE and a significant increase in the SNR: a higher PRS bandwidth in resource blocks yields a higher SNR, while the RMSE of the positioning errors decreases. Conversely, increasing the number of PRS in resource blocks results in a lower SNR and higher RMSE values. A positive outcome of this analysis is that the RMSE remains below one meter in every bandwidth configuration tested.
The positioning accuracy was also analysed for different sample sizes. With an increased sample size, a decrease in the root mean square error (RMSE) and a substantial increase in the SNR were observed.
From this investigation, it can be concluded that the two analyses (sample size and bandwidth) reach the targeted output in different ways. A bandwidth of 38.4 MHz and a sample size of N = 700 are required to achieve below 1 m accuracy, with an SNR of 47.04 dB.
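The RMSE figure used throughout these comparisons is computed over the position-error vectors; the estimates below are hypothetical numbers, only meant to show the computation.

```python
import numpy as np

def rmse(est, truth):
    """Root mean square error of 2D position estimates, in meters."""
    err = np.linalg.norm(est - truth, axis=1)  # per-estimate error distance
    return np.sqrt(np.mean(err ** 2))

# Four hypothetical estimates of a UE at (120, 80), each 0.5 m off
truth = np.array([[120.0, 80.0]] * 4)
est = truth + np.array([[0.3, 0.4], [-0.3, -0.4], [0.3, -0.4], [-0.3, 0.4]])
print(rmse(est, truth))  # every error is 0.5 m, so RMSE = 0.5
```

A "below 1 m" result in the thesis means this statistic stays under one meter across the tested configurations.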
Traffic Steering in Radio Level Integration of LTE and Wi-Fi Networks
A smartphone generates approximately 1,614 MB of data per month, which is 48 times the data generated by a typical basic-feature cell phone. Cisco forecasts that mobile data traffic will continue to grow, reaching 49 Exabytes per month by 2021.
However, the telecommunication service providers/operators face many challenges in order
to improve cellular network capacity to match these ever-increasing data demands due to
low, almost flat Average Revenue Per User (ARPU) and low Return on Investment (RoI).
Spectrum resource crunch and licensing requirement for operation in cellular bands further
complicate the procedure to support and manage the network.
In order to deal with the aforementioned challenges, one of the most vital solutions is
to leverage the integration benefits of cellular networks with unlicensed operation of Wi-Fi
networks. A closer level of cellular and Wi-Fi coupling/interworking improves Quality of Service (QoS) through unified connection management for user devices (UEs). It also offloads a significant portion of user traffic from the cellular Base Station (BS) to the Wi-Fi Access Point
(AP). In this thesis, we have considered the cellular network to be Long Term Evolution
(LTE) popularly known as 4G-LTE for interworking with Wi-Fi.
Third Generation Partnership Project (3GPP) defined various LTE and Wi-Fi interworking architectures from Rel-8 to Rel-11. Because of the limitations of these legacy LTE Wi-Fi interworking solutions, 3GPP proposed Radio Level Integration (RLI) architectures to enhance flow mobility and to react quickly to channel dynamics. An RLI node encompasses a link-level connection between LTE and Wi-Fi. The first set of problems addressed concerns: (i) Small cell deployments, (ii) Meeting Guaranteed Bit Rate (GBR) requirements of the users, including those experiencing poor Signal to Interference plus Noise Ratio (SINR), and (iii) Dynamic steering of the flows across LTE and Wi-Fi links to maximize the system throughput.
The second important problem addressed is uplink traffic steering. To enable efficient uplink traffic steering in the LWIP system, this thesis proposes a Network Coordination Function (NCF). The NCF is realized at the LWIP node and encompasses four different uplink traffic steering algorithms for efficient utilization of Wi-Fi resources in the LWIP system. It enables the network to take intelligent decisions rather than leaving individual UEs to decide whether to steer uplink traffic onto the LTE link or the Wi-Fi link. The NCF algorithms leverage the availability of LTE as the anchor to improve the channel utilization of Wi-Fi.
The third important problem is enabling packet-level steering in LWIP. When the data rates of the LTE and Wi-Fi links are incomparable, steering packets across the links creates problems for TCP traffic. When packets are received Out-of-Order (OOO) at the TCP receiver due to the different delays experienced on each link, DUPlicate ACKnowledgements (DUP-ACKs) are generated. These unnecessary DUP-ACKs adversely affect TCP congestion window growth and thereby lead to poor TCP performance. This thesis addresses the problem by proposing a virtual congestion control mechanism (VIrtual congeStion control wIth Boost acknowLedgEment - VISIBLE). The proposed mechanism not only improves the throughput of a flow by reducing the number of unnecessary DUP-ACKs delivered to the TCP sender, but also sends Boost ACKs to rapidly grow the congestion window and reap the aggregation benefits of heterogeneous links.
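The OOO/DUP-ACK mechanism described above can be reproduced with a toy cumulative-ACK model; the arrival pattern below is an invented example of a fast link racing ahead of a slow one, not a trace from the LWIP testbed.

```python
def cumulative_acks(arrivals):
    """Return the cumulative ACK emitted for each arriving segment.

    TCP acknowledges the next expected segment; every out-of-order
    arrival re-emits the previous cumulative ACK (a duplicate ACK).
    """
    received, next_expected, acks = set(), 0, []
    for seg in arrivals:
        received.add(seg)
        while next_expected in received:  # advance past contiguous data
            next_expected += 1
        acks.append(next_expected)        # ACK carries next expected segment
    return acks

# Segments 0-5 split across links: even ones on fast Wi-Fi, odd ones on slow LTE
arrivals = [0, 2, 4, 1, 3, 5]             # odd segments arrive late
acks = cumulative_acks(arrivals)
dup_acks = sum(1 for a, b in zip(acks, acks[1:]) if a == b)
print(acks, dup_acks)  # → [1, 1, 1, 3, 5, 6] 2
```

The two repeated ACKs here are exactly the spurious duplicates that VISIBLE suppresses before they reach the TCP sender.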
The fourth problem considered is the placement of LWIP nodes. In this thesis, we have
addressed problems pertaining to the dense deployment of LWIP nodes. LWIP deployment
can be realized in colocated and non-colocated fashion. The placement of LWIP nodes is
done with the following objectives: (i) Minimizing the number of LWIP nodes deployed
without any coverage holes, (ii) Maximizing SINR in every sub-region of a building, and
(iii) Minimizing the energy spent by UEs and LWIP nodes.
Finally, prototypes of the RLI architectures are presented (i.e., LWIP and LWA testbeds). The prototypes are developed using the open-source LTE platform OpenAirInterface (OAI) and commercial off-the-shelf hardware components. The developed LWIP prototype is made to work with a commercial UE (Nexus 5). The LWA prototype requires modification of the UE protocol stack, hence it is realized using OAI-UE. The developed prototypes are coupled with a legacy multipath protocol, MPTCP, to investigate the coupling benefits.
Energy Efficient Cloud Computing Based Radio Access Networks in 5G. Design and evaluation of an energy aware 5G cloud radio access networks framework using base station sleeping, cloud computing based workload consolidation and mobile edge computing
Fifth Generation (5G) cellular networks will experience a thousand-fold increase in data traffic, with over 100 billion connected devices by 2020. In order to support this skyrocketing traffic demand, smaller base stations (BSs) are deployed to increase capacity. However, more BSs increase energy consumption, which contributes to operational expenditure (OPEX) and CO2 emissions. In addition, the plethora of 5G applications running on mobile devices causes a significant amount of energy consumption in the devices themselves. This thesis presents a novel framework for energy efficiency in 5G cloud radio access networks (C-RAN) by leveraging cloud computing technology. Energy efficiency is achieved in three ways: (i) at the radio side of the H-C-RAN (Heterogeneous C-RAN), a dynamic BS switching-off algorithm is proposed to minimise energy consumption while maintaining Quality of Service (QoS); (ii) in the BS cloud, baseband workload consolidation schemes based on simulated annealing and genetic algorithms are proposed to minimise energy consumption in the cloud, where advanced fuzzy-based admission control with pre-emption is also implemented to improve QoS and resource utilisation; and (iii) at the mobile device side, Mobile Edge Computing (MEC) is used, whereby compute-intensive tasks from the mobile device are executed in the MEC server in the cloud. The simulation results show that the proposed framework effectively reduced energy consumption by up to 48% within the RAN and 57% in the mobile devices, and improved network energy efficiency by a factor of 10, network throughput by a factor of 2.7 and resource utilisation by 54% while maintaining QoS.
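The dynamic BS switching-off idea in (i) can be illustrated with a deliberately simple greedy rule; the unit-capacity model, the QoS margin and the load figures below are assumptions made for this sketch, not the algorithm proposed in the thesis.

```python
def switch_off(loads, capacity=1.0, qos_margin=0.8):
    """Greedily put the least-loaded BSs to sleep while the remaining
    BSs can absorb the total load within a QoS headroom margin."""
    active = sorted(loads)        # ascending: best sleep candidates first
    total = sum(loads)            # total load is conserved after offloading
    asleep = 0
    while len(active) > 1:
        remaining = len(active) - 1
        if total <= qos_margin * capacity * remaining:
            active.pop(0)         # sleep the lightest BS, offload its users
            asleep += 1
        else:
            break                 # sleeping one more would violate the margin
    return asleep

# Ten small cells at 20% average load: most can sleep off-peak
print(switch_off([0.2] * 10))  # → 7
```

Even this crude rule shows why off-peak consolidation saves energy: seven of ten lightly loaded cells can sleep while the survivors stay within the QoS margin.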
Terminal LTE flexível (Flexible LTE Terminal)
Mestrado em Engenharia Eletrónica e Telecomunicações (Master's in Electronics and Telecommunications Engineering).
Mobile networks are constantly evolving. 4G is the current generation of
broadband cellular network technology and is represented by the Long Term Evolution (LTE) standard, defined by the 3rd Generation Partnership Project (3GPP). There is a high demand for LTE at the moment, with the number of mobile devices requiring a high-speed Internet connection increasing exponentially. This may overcrowd the spectrum on the existing deployments, and the signal needs to be reinforced and coverage improved in specific sites, such as large conferences, festivals and sport events. On the other hand, it would be an important advantage if users could continue to use their equipment and terminals in situations where cellular networks aren't usually available, such as on board a cruise ship, at sporadic events in remote locations, or in catastrophe scenarios in which the telecommunication infrastructure was damaged and the rapid deployment of a temporary network can save lives. In all of these situations, the availability of flexible and easily deployable cellular base stations and user terminals operating on standard or custom bands would be very desirable. Thus, there is a clear motivation for the development of a fully reconfigurable cellular infrastructure solution that fulfills these requirements.
A possible approach is an open-source, low-cost and low-maintenance Software-Defined Radio (SDR) software platform that implements the LTE standard and runs on General Purpose Processors (GPPs), making it possible to build an entire network while only spending money on the hardware itself - computers and Radio-Frequency (RF) front-ends. After comparison and analysis of several open-source LTE SDR platforms, EURECOM's OpenAirInterface (OAI) was chosen, providing a 3GPP standard-compliant implementation of Release 8.6 (with a subset of Release 10 functionalities).
The main goal of this dissertation is the implementation of a flexible open-source LTE User Equipment (UE) software radio platform on a compact and low-power Single Board Computer (SBC) device, integrated with an RF hardware front-end - the Universal Software Radio Peripheral (USRP). It supports real-time Time Division Duplex (TDD) and Frequency Division Duplex (FDD) LTE modes and the reconfiguration of several parameters, namely the carrier frequency, the bandwidth and the number of LTE Resource Blocks (RBs) used. It can also share its LTE mobile data with nearby users, similarly to a Wi-Fi hotspot. The implementation is described through its several development steps, including the porting of the UE from a regular computer to an SBC. The performance of the network is then analysed based on measured throughput results.
Optimization of Handover, Survivability, Multi-Connectivity and Secure Slicing in 5G Cellular Networks using Matrix Exponential Models and Machine Learning
Title from PDF of title page, viewed January 31, 2023. Dissertation advisor: Cory Beard. Includes bibliographical references (pages 173-194). Dissertation (Ph.D.)--Department of Computer Science and Electrical Engineering, University of Missouri--Kansas City, 2022.
This work proposes optimization of cellular handovers, cellular network survivability modeling, multi-connectivity and secure network slicing using matrix exponentials and machine learning techniques. We propose matrix exponential (ME) modeling of handover arrivals, with the potential to characterize arrivals much more accurately and to prioritize resource allocation for handovers, especially handovers for emergency or public safety needs. With the use of a ‘B’ matrix for representing a handover arrival, we have a rich set of dimensions to model system handover behavior. We can study multiple parameters and the interactions between system events along with the user mobility, which would trigger a handoff in any given scenario. Additionally, unlike traditional handover improvement schemes, we develop a ‘Deep-Mobility’ model by implementing a deep learning neural network (DLNN) to manage network mobility, utilizing in-network deep learning and prediction. We use the radio and network key performance indicators (KPIs) to train our model to analyze network traffic and handover requirements.
Cellular network design must incorporate disaster response, recovery and repair scenarios. Requirements for high reliability and low latency often fail to incorporate network survivability for mission-critical and emergency services. Our Matrix Exponential (ME) model shows how survivable networks can be designed by controlling the number of crews, the times taken for individual repair stages, and the balance between fast and slow repairs. Transient and steady-state representations of system repair models, namely fast and slow repairs for networks consisting of multiple repair crews, have been analyzed. Failures are exponentially modeled as per common practice, but ME distributions describe the more complex recovery processes.
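A matrix-exponential repair-time distribution of the kind used above is evaluated directly from its vector-matrix representation, F(t) = 1 - alpha·exp(Tt)·1. The two-stage repair process below is a generic textbook example (an Erlang-2, which is also a valid ME representation), not one of the thesis's fitted models.

```python
import numpy as np
from scipy.linalg import expm

def me_cdf(alpha, T, t):
    """CDF of a matrix-exponential distribution:
    F(t) = 1 - alpha @ expm(T*t) @ 1, with initial vector alpha
    and sub-generator matrix T."""
    ones = np.ones(T.shape[0])
    return 1.0 - alpha @ expm(T * t) @ ones

# Two-stage repair, each stage exponential with rate 1 (Erlang-2)
alpha = np.array([1.0, 0.0])
T = np.array([[-1.0,  1.0],
              [ 0.0, -1.0]])
print(me_cdf(alpha, T, 1.0))  # Erlang-2 CDF at t=1: 1 - 2e^{-1} ≈ 0.2642
```

Richer repair behaviors (e.g., a mix of fast and slow crews) are captured by choosing larger alpha and T, without changing this evaluation formula.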
In some mission-critical communications, the availability requirements may exceed five or even six nines (99.9999%). To meet such a critical requirement and minimize the impact of mobility during handover, a Fade Duration Outage Probability (FDOP) based multi-radio-link connectivity handover method has been proposed; by utilizing two or more uncorrelated links selected for minimum FDOP values, a high degree of availability can be achieved. Packet duplication (PD) via multi-connectivity is a method of compensating for lost packets on a wireless channel. However, complete packet duplication is inefficient and frequently unnecessary. We therefore provide a novel adaptive fractional packet duplication (A-FPD) mechanism for enabling and disabling packet duplication based on a variety of parameters.
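The availability arithmetic behind duplication over uncorrelated links is simple to reproduce: with independent links, the duplicated transmission fails only when all links fail at once. The per-link outage values below are illustrative.

```python
import math

def combined_outage(p_links):
    """Outage of a transmission duplicated over independent links:
    it is lost only if every link is simultaneously in outage."""
    out = 1.0
    for p in p_links:
        out *= p
    return out

def nines(availability):
    """Number of 'nines' of availability, e.g. 0.999999 -> 6."""
    return -math.log10(1.0 - availability)

# Two independent links, each only 99.9% available (three nines)
p = combined_outage([1e-3, 1e-3])
print(1.0 - p, round(nines(1.0 - p)))  # two three-nines links give six nines
```

This multiplicative gain is why duplicating every packet is often overkill, motivating the fractional (A-FPD) approach.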
We have developed a ‘DeepSlice’ model by implementing a Deep Learning (DL) Neural Network to manage network load efficiency and network availability, utilizing in-network deep learning and prediction. Our Neural Network based ‘Secure5G’ network slicing model will proactively detect and eliminate threats based on incoming connections before they infest the 5G core network elements. These will enable network operators to sell network slicing as-a-service, serving diverse services efficiently over a single infrastructure with a higher level of security and reliability.
Introduction -- Matrix exponential and deep learning neural network modeling of cellular handovers -- Survivability modeling in cellular networks -- Multi connectivity based handover enhancement and adaptive fractional packet duplication in 5G cellular networks -- Deepslice and Secure5G: a deep learning framework towards an efficient, reliable and secure network slicing in 5G networks -- Conclusion and future scope
Gestion conjointe de ressources de communication et de calcul pour les réseaux sans fils à base de cloud (Joint Management of Communication and Computing Resources for Cloud-Based Wireless Networks)
Mobile Edge Cloud brings the cloud closer to mobile users by moving cloud computation from the Internet to the mobile edge. We adopt a local mobile edge cloud computing architecture, where small cells are empowered with computational and storage capacities, and mobile users' offloaded computational tasks are executed at the cloud-enabled small cells. We propose the concept of small cell clustering for mobile edge computing, where small cells cooperate in order to execute offloaded computational tasks. A first contribution of this thesis is the design of a multi-parameter computation offloading decision algorithm, SM-POD. The proposed algorithm consists of a series of low-complexity successive and nested classifications of computational tasks at the mobile side, leading either to local computation or to offloading to the cloud. To reach the offloading decision, SM-POD jointly considers computational task, handset, and communication channel parameters. In the second part of this thesis, we tackle the problem of setting up small cell clusters for mobile edge cloud computing in both the single-user and multi-user cases. The clustering problem is formulated as an optimization problem that jointly optimizes the computational and communication resource allocation and the computational load distribution over the small cells participating in the computation cluster. We propose a cluster sparsification strategy, where we trade cluster latency for higher system energy efficiency. In the multi-user case, the optimization problem is not convex. In order to compute a clustering solution, we propose a convex reformulation of the problem, and we prove that both problems are equivalent. With the goal of finding a lower-complexity clustering solution, we propose two heuristic small cell clustering algorithms. The first algorithm, as a first step, performs resource allocation at the serving small cells where tasks are received.
Then, in a second step, unserved tasks are sent to a small cell managing unit (SCM) that sets up computational clusters for the execution of these tasks. The main idea of this algorithm is task scheduling at both the serving small cells and the SCM for higher resource allocation efficiency. The second proposed heuristic is an iterative approach in which serving small cells compute their desired clusters, without considering the presence of other users, and send their cluster parameters to the SCM. The SCM then checks for excess resource allocation at any of the network's small cells and reports any load excess to the serving small cells, which redistribute this load over less loaded small cells. In the final part of this thesis, we propose the concept of computation caching for edge cloud computing. With the aim of reducing edge cloud computing latency and energy consumption, we propose caching popular computational tasks to prevent their re-execution. Our contribution here is two-fold: first, we propose a caching algorithm that is based on request popularity, computation size, required computational capacity, and small cell connectivity. This algorithm identifies the requests that, if cached and downloaded instead of being re-computed, will increase the computation caching energy and latency savings. Second, we propose a method for setting up a search cluster of small cells for finding a cached copy of a request's computation. The clustering policy exploits the relationship between task popularity and the probability of a task being cached, in order to identify possible locations of the cached copy. The proposed method reduces the search cluster size while guaranteeing a minimum cache hit probability.
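The offloading trade-off that SM-POD resolves can be caricatured with a two-term cost comparison; the energy and latency coefficients below are invented for illustration, and the rule is far cruder than SM-POD's nested classifications (which also weigh channel quality and remote execution time).

```python
def offload_decision(cycles, data_bits, cpu_hz, uplink_bps,
                     local_j_per_cycle=1e-9, tx_j_per_bit=1e-7):
    """Offload when the remote path wins on both latency and device
    energy under an illustrative cost model (not the SM-POD algorithm)."""
    local_latency = cycles / cpu_hz            # seconds to compute locally
    local_energy = cycles * local_j_per_cycle  # joules burnt by the handset
    tx_latency = data_bits / uplink_bps        # upload time (remote compute ignored)
    tx_energy = data_bits * tx_j_per_bit       # joules to transmit the input
    return "offload" if (tx_latency < local_latency and
                         tx_energy < local_energy) else "local"

# Heavy computation with a small input: offloading pays off
print(offload_decision(cycles=5e9, data_bits=1e5, cpu_hz=1e9, uplink_bps=2e7))
# Light computation with a large input: compute locally
print(offload_decision(cycles=1e7, data_bits=1e8, cpu_hz=1e9, uplink_bps=2e7))
```

The same intuition drives the clustering work that follows: tasks worth offloading must then be placed on small cells with enough spare computational and radio resources.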