IMDEA Networks Institute Digital Repository
Demystifying Resource Allocation Policies in Operational 5G mmWave Networks
Five years after the initial 5G rollout, several research works have analyzed the performance of operational 5G mmWave networks. However, these measurement studies primarily focus on single-user performance, leaving the sharing and resource allocation policies largely unexplored. In this paper, we fill this gap by conducting the first systematic study, to the best of our knowledge, of the resource allocation policies of current 5G mmWave mobile network deployments through an extensive measurement campaign across four major US cities and two major mobile operators. Our study reveals that resource allocation among multiple flows is strictly governed by the cellular operators and that flows are not allowed to compete with each other in a shared queue. Operators employ simple threshold-based policies and often over-allocate resources to new flows with low traffic demands or reserve some capacity for future usage. Interestingly, these policies vary not only among operators but also for a single operator across different cities. We also discuss a number of anomalous behaviors we observe in our experiments across different cities and operators.
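The threshold-based behavior the abstract describes can be pictured with a toy model. Everything below (the function name, the 50 Mbps minimum block, the 20 Mbps threshold) is a hypothetical sketch for illustration, not the operators' measured policy:

```python
def allocate(capacity_mbps, demands_mbps, min_block=50.0, threshold=20.0):
    """Toy threshold policy: each flow gets a dedicated grant (no shared
    queue); flows below the threshold are over-allocated a minimum block."""
    allocations = []
    remaining = capacity_mbps
    for demand in demands_mbps:
        if demand < threshold:
            grant = min(min_block, remaining)   # over-allocate small flows
        else:
            grant = min(demand, remaining)      # cap at remaining capacity
        allocations.append(grant)
        remaining -= grant
    return allocations

# A 10 Mbps flow is over-allocated 50 Mbps; later flows split the rest.
print(allocate(1000, [10, 300, 900]))   # → [50, 300, 650]
```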
NLP-Driven Approaches to Measuring Online Polarization and Radicalization
The growing popularity of social media has coincided with a massive number of real-world issues and crises that are controversial and polarizing. Recent issues such as the Russo-Ukrainian and Israeli-Palestinian conflicts, alongside classic issues such as abortion bans and gun control, have raised heated debates offline and online. Throughout the past two decades, Computational Social Scientists have been introducing methods of modeling and measuring online polarization and radicalization. Yet, most of the proposed methods rely on traditional tools such as graph analysis and classic NLP models. These tools come with limitations in terms of scalability, granularity, and availability of data (e.g., the follow network is no longer publicly available on Twitter).
Fortunately, in the past few years, thanks to the invention of the transformer architecture, the world has witnessed massive breakthroughs in the field of Natural Language Processing (NLP). In particular, Large Language Models (LLMs) have captured the attention of both the public and scientific communities. These breakthroughs have also created unprecedented opportunities for advancing classic techniques in various domains of Computational Social Science, including polarization detection and opinion mining.
This thesis aims to propose novel approaches using state-of-the-art NLP techniques to model and track polarization on social media. It introduces a scalable method for quantifying echo chambers with sentence transformers, revealing asymmetries in discourse diversity across political ideologies. Furthermore, it applies LLMs to analyze the content of cross-partisan interactions, showing that cross-party engagement does not necessarily lead to productive discourse. The thesis also investigates radicalization in gender-based communities and compares the spread of radical content across platforms like Reddit and Discord. Lastly, it addresses the limitations of existing language models in detecting stance polarity by fine-tuning a sentence transformer to become stance-aware, enabling more accurate detection of opposing viewpoints on similar topics. Together, these contributions offer Computational Social Scientists new tools for understanding polarization, radicalization, and bias in online environments.
Telematics Engineering, Universidad Carlos III de Madrid, Spain.
A Scalable DNN Training Framework for Traffic Forecasting in Mobile Networks
The exponential growth of mobile data traffic demands efficient and scalable forecasting methods to optimize network performance. Traditional approaches, like training individual models for each Base Station (BS), are computationally prohibitive for large-scale production deployments. In this paper, we propose a scalable Deep Neural Network (DNN) training framework for mobile network traffic forecasting that reduces input redundancy and computational overhead. We minimize the number of input probes (traffic monitors at BSs) by grouping BSs with temporal similarity using K-means clustering with Dynamic Time Warping (DTW) as the distance metric. Within each cluster, we train a DNN model, selecting a subset of BSs as inputs to predict future traffic demand for all BSs in that cluster. To further optimize input selection, we leverage the well-known eXplainable Artificial Intelligence (XAI) technique Layer-wise Relevance Propagation (LRP) to identify the most influential BSs within each cluster. This makes it possible to reduce the number of required probes while maintaining high prediction accuracy. To validate our newly proposed framework, we conduct experiments on two real-world mobile traffic datasets. Our approach achieves competitive accuracy while reducing the total number of input probes by approximately 81% compared to state-of-the-art predictors.
An Urban Geography of Mobile Application Usage: Connecting Demand Dynamics and Urban Fabrics
The surge in usage of mobile applications generates a massive volume of traffic data exhibiting unique dynamics that are hard to unravel. In this work, we leverage factor analysis to pin down recurrent patterns of mobile traffic over the three dimensions of space, time, and services in multi-city measurements of unprecedented resolution. We link the revealed structures of real-world mobile demands to urban fabrics, i.e., the combination of infrastructures and social characteristics that determine the functionality of urban territory, hence establishing connections between specific city landscapes and the mobile application consumption they create. Our study provides a new understanding of the diversity of mobile service dynamics in metropolitan areas, including insights on how economic status drives the adoption of specific applications, how residential versus commercial areas create a dichotomy in application usage, how private and public transport drive surges in the prevalence of different sets of applications, or how nightlife or university studies stimulate the utilization of specific classes of services.
Funding: European Union, French National Research Agency, Fonds de Recherche du Québec.
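As one concrete, hypothetical illustration of factor-style pattern extraction on mobile demand, the sketch below decomposes a synthetic space-by-time traffic matrix with non-negative matrix factorization, recovering a "residential" (evening-peaked) and a "commercial" (daytime-peaked) temporal signature. The paper's actual factor-analysis method and data differ:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
hours = np.arange(24)
# Two planted temporal signatures: evening-peaked vs. daytime-peaked demand.
residential = np.exp(-0.5 * ((hours - 20) / 3.0) ** 2)
commercial = np.exp(-0.5 * ((hours - 13) / 3.0) ** 2)
mix = rng.random((40, 2))                      # per-cell land-use mixture
traffic = mix @ np.vstack([residential, commercial])   # 40 cells x 24 hours

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=1000)
W = model.fit_transform(traffic)               # spatial loadings per cell
H = model.components_                          # recovered temporal patterns
rel_err = np.linalg.norm(traffic - W @ H) / np.linalg.norm(traffic)
print(rel_err)                                 # near zero on this rank-2 data
```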
Steady-state coherence in multipartite quantum systems: its connection with thermodynamic quantities and impact on quantum thermal machines
Understanding how the coherence of quantum systems affects thermodynamic quantities, such as work and heat, is essential for harnessing quantumness effectively in thermal quantum technologies. Here, we study the unique contributions of quantum coherence among different subsystems of a multipartite system, specifically in non-equilibrium steady states, to work and heat currents. Our system comprises two coupled ensembles, each consisting of N particles and each interacting with one of two baths at different temperatures. The particles in an ensemble interact with their bath either simultaneously or sequentially, leading to non-local dissipation and enabling the decomposition of work and heat currents into local and non-local components. We find that the non-local heat current, as well as both the local and non-local work currents, are linked to the system's quantum coherence. We provide explicit expressions of coherence-related quantities that determine the work currents under various intrasystem interactions. Our scheme is versatile, capable of functioning as a refrigerator, an engine, and an accelerator, with its performance being highly sensitive to the configuration settings. These findings establish a connection between thermodynamic quantities and quantum coherence, supplying valuable insights for the design of quantum thermal machines.
Beneath the surface: An analysis of OEM customizations on the Android TLS protocol stack
The open-source nature of the Android Open Source Project (AOSP) allows Original Equipment Manufacturers (OEMs) to customize the Android operating system, contributing to what is known as Android fragmentation. Google has implemented the Compatibility Definition Document (CDD) and the Compatibility Test Suite (CTS) to ensure the integrity and security of the Android ecosystem. However, the effectiveness of these policies and measures in ensuring OEM compliance remains uncertain. This paper empirically studies for the first time the nature of OEM customizations in the Android TLS protocol stack and their security implications for user-installed mobile apps across thousands of Android models. We find that approximately 80% of the analyzed Android models deviate from the standard AOSP TLS codebase and that OEM customizations often involve code changes in functions used by app developers to enhance TLS security, like endpoint and certificate verification. Our analysis suggests that these customizations are likely influenced by factors such as manufacturers' supply chain dynamics and patching prioritization tactics, including the need to support legacy components. We conclude by identifying potential root causes and emphasizing the need for stricter policy enforcement, better supply chain controls, and improved patching processes across the ecosystem.
Funding: Spanish National Cybersecurity Institute (INCIBE), Comunidad de Madrid.
Middle-Output Deep Image Prior for Blind Hyperspectral and Multispectral Image Fusion
Obtaining a low-spatial-resolution hyperspectral (HS) image or a low-spectral-resolution multispectral (MS) image from a high-resolution (HR) spectral image is straightforward with knowledge of the acquisition models. However, the reverse process, from HS and MS to HR, is an ill-posed problem known as spectral image fusion. Although recent fusion techniques based on supervised deep learning have shown promising results, these methods require large training datasets involving expensive acquisition costs and long training times. In contrast, unsupervised HS and MS image fusion methods have emerged as an alternative to data demand issues; however, they rely on knowledge of the linear degradation models for optimal performance. To overcome these challenges, we propose the Middle-Output Deep Image Prior (MODIP) for unsupervised blind HS and MS image fusion. MODIP is adjusted for the HS and MS images, and the HR fused image is estimated at an intermediate layer within the network. The architecture comprises two convolutional networks that reconstruct the HR spectral image from HS and MS inputs, along with two networks that appropriately downscale the estimated HR image to match the available MS and HS images, learning the non-linear degradation models. The network parameters of MODIP are jointly and iteratively adjusted by minimizing a proposed loss function. This approach can handle scenarios where the degradation operators are unknown or partially estimated. To evaluate the performance of MODIP, we test the fusion approach on three simulated spectral image datasets (Pavia University, Salinas Valley, and CAVE) and a real dataset obtained through a testbed implementation in an optical lab. Extensive simulations demonstrate that MODIP outperforms other unsupervised model-based image fusion methods by up to 6 dB in PSNR.
SYMBXRL: Symbolic Explainable Deep Reinforcement Learning for Mobile Networks
The operation of future 6th-generation (6G) mobile networks will increasingly rely on the ability of Deep Reinforcement Learning (DRL) to optimize network decisions in real time. DRL has demonstrated efficacy in various resource allocation problems, such as joint decisions on user scheduling and antenna allocation or simultaneous control of computing resources and modulation. However, trained DRL agents are closed boxes and inherently difficult to explain, which hinders their adoption in production settings. In this paper, we take a step towards removing this critical barrier by presenting SYMBXRL, a novel technique for eXplainable Reinforcement Learning (XRL) that synthesizes human-interpretable explanations for DRL agents. SYMBXRL leverages symbolic AI to produce explanations where key concepts and their relationships are described via intuitive symbols and rules; coupling such a representation with logical reasoning exposes the decision process of DRL agents and offers more comprehensible descriptions of their behavior than existing approaches. We validate SYMBXRL in practical network management use cases supported by DRL, proving that it not only improves the semantics of the explanations but also paves the way for explicit agent control: for instance, it enables intent-based programmatic action steering that improves the median cumulative reward by 12% over a pure DRL solution.
HELIX: High-speed Real-Time Experimentation Platform for 6G Wireless Networks
Mobile networks are evolving rapidly, with 6G promising unprecedented capabilities in terms of data rates and ultra-low latencies. However, the development of testbed platforms for wireless experimentation has not kept pace. Existing platforms typically offer either end-to-end capabilities with low bandwidth or high bandwidth with limited or no real-time functionality. In this paper, we introduce HELIX, an experimentation platform with 6G scalable real-time capabilities. HELIX integrates a comprehensive physical layer subsystem with multi-numerology support alongside an advanced mixed software-hardware control unit responsible for interacting with the fronthaul network and dynamically configuring the functional split in real time. On the server side, we implement the necessary drivers and routines to enable seamless integration with O-RAN systems, thus facilitating open and end-to-end experimentation. We demonstrate the capabilities of HELIX through a variety of experiments at sub-6 GHz, 28 GHz, and 60 GHz frequencies. Notably, HELIX achieves data rates of up to 1200 Mbps using 256-QAM modulation with over 417 MHz of bandwidth, and end-to-end bidirectional latencies of 500 µs. We show advanced features, including the implementation of Integrated Sensing And Communication (ISAC), and discuss how the platform could be extended to support bandwidths of up to 1670 MHz.
Demonstrating Deep Learning-based Spatial Diffusion
Metadata geolocation, i.e., mapping information collected at a cellular Base Station (BS) to the geographical area it covers, is a central operation in producing statistics from mobile network measurements. This task requires modeling the probability that a device attached to a BS is at a specific location, and it is currently accomplished via simplistic approximations based on Voronoi tessellations. However, Voronoi cells exhibit poor accuracy compared to real-world geolocation data, which can reduce the reliability of downstream research pipelines. To overcome this limitation, DEEPMEND proposes a new data-driven approach relying on a teacher-student paradigm that combines probabilistic inference and deep learning. Like other benchmarks, DEEPMEND can produce geolocation maps using only the BS positions, yielding a 56% accuracy gain compared to Voronoi tessellations. Our demonstrator will show visual and qualitative comparisons between DEEPMEND and several competitor approaches, allowing users to explore BS deployments from different geographical regions and operators.
Funding: Comunidad de Madrid, European Union.
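The Voronoi baseline that DEEPMEND improves upon amounts to assigning each device location to its nearest BS, i.e., the BS whose Voronoi cell contains it. A minimal sketch with synthetic coordinates (all names and data hypothetical):

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_assign(bs_positions, device_positions):
    """Voronoi-tessellation geolocation baseline: each device is mapped
    to the index of its nearest base station."""
    tree = cKDTree(bs_positions)
    _, nearest = tree.query(device_positions)
    return nearest

bs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
devices = np.array([[1.0, 1.0], [9.0, 1.0], [1.0, 9.0]])
print(voronoi_assign(bs, devices))   # → [0 1 2]
```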