215 research outputs found
Chatbots for Modelling, Modelling of Chatbots
Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Defense date: 28-03-202
Detection and Mitigation of Steganographic Malware
A recent attack trend concerns the use of steganography and information hiding to make malware stealthier and able to elude many standard security mechanisms. This Thesis therefore addresses the detection and mitigation of this class of threats. In particular, it considers malware implementing covert communications within network traffic or cloaking malicious payloads within digital images.
The first research contribution of this Thesis is the detection of network covert channels. Unfortunately, the literature on the topic lacks real traffic traces and attack samples with which to perform precise tests or security assessments. Thus, a preparatory research activity was devoted to developing two ad-hoc tools. The first creates covert channels targeting the IPv6 protocol by eavesdropping on flows, whereas the second embeds secret data within arbitrary traffic traces that can be replayed to perform investigations in realistic conditions. The Thesis then presents a security assessment of the impact of hidden network communications in production-quality scenarios. Results were obtained by considering channels cloaking data in the most popular protocols (e.g., TLS, IPv4/v6, and ICMPv4/v6) and showed that de-facto standard intrusion detection systems and firewalls (i.e., Snort, Suricata, and Zeek) are unable to spot this class of hazards.
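The kind of IPv6 storage channel assessed above can be illustrated with a minimal, self-contained sketch (illustrative only, not one of the Thesis's tools): a sender hides 20-bit chunks of a secret in the IPv6 Flow Label field, and a receiver recovers them by parsing the raw headers.

```python
import struct

FLOW_LABEL_BITS = 20  # width of the IPv6 Flow Label field (RFC 8200)

def embed_in_flow_label(secret: bytes):
    """Split a secret into 20-bit chunks and place each one in the
    Flow Label of a minimal (otherwise empty) 40-byte IPv6 header."""
    bits = int.from_bytes(secret, "big")
    nbits = len(secret) * 8
    headers = []
    for shift in range(0, nbits, FLOW_LABEL_BITS):
        chunk = (bits >> shift) & ((1 << FLOW_LABEL_BITS) - 1)
        # First 32 bits of IPv6: version (6), traffic class (0), flow label
        word = (6 << 28) | chunk
        headers.append(struct.pack("!I", word) + bytes(36))
    return headers

def extract_from_flow_label(headers, secret_len: int) -> bytes:
    """Recover the secret by reading the Flow Label of each header."""
    bits = 0
    for i, h in enumerate(headers):
        word, = struct.unpack("!I", h[:4])
        bits |= (word & ((1 << FLOW_LABEL_BITS) - 1)) << (i * FLOW_LABEL_BITS)
    return bits.to_bytes(secret_len, "big")

headers = embed_in_flow_label(b"cmd:run")
assert extract_from_flow_label(headers, 7) == b"cmd:run"
```

Because the Flow Label is legitimately random in many stacks, such a channel does not alter packet sizes or payloads, which is precisely why signature-based tools struggle to spot it.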
Since malware can conceal information (e.g., commands and configuration files) in almost every protocol, traffic feature, or network element, configuring or adapting pre-existing security solutions may not be straightforward. Moreover, inspecting multiple protocols, fields, or conversations at the same time can lead to performance issues.
Thus, a major effort was devoted to developing a suite based on the extended Berkeley Packet Filter (eBPF) to gain visibility over different network protocols and components and to efficiently collect various performance indicators and statistics with a single technology. This part of the research made it possible to spot network covert channels targeting the header of the IPv6 protocol or the inter-packet time of generic network conversations. In addition, the eBPF-based approach turned out to be very flexible and also revealed hidden data transfers between two processes co-located within the same host. Another important contribution of this part of the Thesis concerns the deployment of the suite in realistic scenarios and its comparison with similar tools. Specifically, a thorough performance evaluation demonstrated that eBPF can be used to inspect traffic and reveal covert communications even under high loads, e.g., it can sustain rates up to 3 Gbit/s on commodity hardware. To further address the problem of revealing network covert channels in realistic environments, this Thesis also investigates malware targeting traffic generated by Internet of Things devices. In this case, an incremental ensemble of autoencoders was used to cope with the "unknown" location of the hidden data generated by a threat covertly exchanging commands with a remote attacker.
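As a rough illustration of why inter-packet times expose covert channels (a hypothetical statistic, not the eBPF suite's actual detector): timing channels that modulate delays around a few fixed values produce inter-arrival distributions that cluster, which a simple "epsilon-similarity" score over sorted gaps can flag.

```python
def epsilon_similarity(inter_packet_times, eps=0.05):
    """Fraction of consecutive *sorted* inter-packet times whose relative
    difference is below eps. Timing channels that encode bits as a few
    fixed delays cluster tightly and thus yield high scores."""
    ts = sorted(inter_packet_times)
    pairs = [(a, b) for a, b in zip(ts, ts[1:]) if a > 0]
    close = sum(1 for a, b in pairs if (b - a) / a < eps)
    return close / len(pairs)

# A binary timing channel: "0" -> ~10 ms gap, "1" -> ~50 ms gap
covert = [0.010, 0.050, 0.010, 0.010, 0.050, 0.050, 0.010, 0.050]
# Legitimate traffic: irregular gaps
normal = [0.003, 0.017, 0.042, 0.009, 0.130, 0.026, 0.071, 0.055]

print(epsilon_similarity(covert) > epsilon_similarity(normal))  # → True
```

An eBPF program would compute such per-flow statistics in the kernel and export only the aggregate score, which is what keeps the inspection cheap at high packet rates.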
The second research contribution of this Thesis is the detection of malicious payloads hidden within digital images. In fact, the majority of real-world malware exploits hiding methods based on Least Significant Bit (LSB) steganography and some of its variants, such as the Invoke-PSImage mechanism. Therefore, a substantial amount of research has been devoted to detecting the presence of hidden data and classifying the payload (e.g., malicious PowerShell scripts or PHP fragments). To this aim, mechanisms leveraging Deep Neural Networks (DNNs) proved to be flexible and effective, since they can learn by combining raw low-level data and can be updated or retrained to handle unseen payloads or images with different features. To take realistic threat models into account, this Thesis studies malware targeting different types of images (i.e., favicons and icons) and various payloads (e.g., URLs and Ethereum addresses, as well as webshells). The obtained results showed that DNNs can be considered a valid tool for spotting hidden contents, since their detection accuracy remains above 90% even when facing "elusion" mechanisms such as basic obfuscation techniques or alternative encoding schemes.
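A minimal sketch of the LSB technique such malware employs (illustrative only; real samples like Invoke-PSImage operate on PNG pixel data): each payload bit replaces the least significant bit of one cover byte, leaving the image visually unchanged.

```python
def lsb_embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least significant bit of each pixel byte."""
    out = bytearray(pixels)
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("payload too large for cover image")
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # clear LSB, then set payload bit
    return out

def lsb_extract(pixels: bytes, payload_len: int) -> bytes:
    """Read back payload_len bytes from the pixel LSBs."""
    bits = [p & 1 for p in pixels[:payload_len * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b * 8:(b + 1) * 8]))
        for b in range(payload_len)
    )

cover = bytearray(range(256)) * 2            # toy 512-byte "image"
stego = lsb_embed(cover, b"<?php echo 1;")   # e.g., a tiny PHP fragment
assert lsb_extract(stego, 13) == b"<?php echo 1;"
```

Since only the lowest bit plane changes, per-pixel differences never exceed 1, which is why detection needs statistical or learned features rather than visual inspection.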
Lastly, when detection or classification is not possible (e.g., due to resource constraints), approaches enforcing "sanitization" can be applied. Thus, this Thesis also considers autoencoders able to disrupt hidden malicious contents without degrading the quality of the image.
Systematic Approaches for Telemedicine and Data Coordination for COVID-19 in Baja California, Mexico
Conference proceedings info:
ICICT 2023: 2023 The 6th International Conference on Information and Computer Technologies
Raleigh, HI, United States, March 24-26, 2023
Pages 529-542
We provide a model for systematic implementation of telemedicine within a large evaluation center for COVID-19 in the area of Baja California, Mexico. Our model is based on human-centric design factors and cross-disciplinary collaborations for scalable, data-driven enablement of smartphone, cellular, and video teleconsultation technologies to link hospitals, clinics, and emergency medical services for point-of-care assessments of COVID testing, and for subsequent treatment and quarantine decisions. A multidisciplinary team was rapidly created, in cooperation with different institutions, including: the Autonomous University of Baja California, the Ministry of Health, the Command, Communication and Computer Control Center of the Ministry of the State of Baja California (C4), Colleges of Medicine, and the College of Psychologists. Our objective is to provide information to the public, to evaluate COVID-19 in real time, and to track regional, municipal, and state-wide data in real time to inform supply chains and resource allocation in anticipation of a surge in COVID-19 cases.
https://doi.org/10.1007/978-981-99-3236-
Chapter 34 - Biocompatibility of nanocellulose: Emerging biomedical applications
Nanocellulose has already proved to be a highly relevant material for biomedical applications, owing to its outstanding mechanical properties and, more importantly, its biocompatibility. Nevertheless, despite the intensive research already devoted to it, a notable number of emerging applications are still being developed. Interestingly, this drive is not solely based on the features of nanocellulose, but is also heavily dependent on sustainability. The three core nanocelluloses are cellulose nanocrystals (CNCs), cellulose nanofibrils (CNFs), and bacterial nanocellulose (BNC). All of these types display highly interesting biomedical properties per se, after modification, and when used in composite formulations. Novel applications that use nanocellulose include well-known areas, namely wound dressings, implants, indwelling medical devices, scaffolds, and novel printed scaffolds. Their cytotoxicity and biocompatibility, assessed using recent methodologies, are thoroughly analyzed to reinforce their near-future applicability. Among the pristine core nanocelluloses, none displays cytotoxicity. However, CNF has the highest potential to fail long-term biocompatibility, since it tends to trigger inflammation. On the other hand, never-dried BNC displays remarkable biocompatibility. Despite this, all nanocelluloses clearly represent flag bearers of future superior biomaterials, elite materials in the urgent replacement of our petrochemical dependence.
IoT and Sensor Networks in Industry and Society
The exponential progress of Information and Communication Technology (ICT) is one of the main elements that fueled the acceleration of the pace of globalization. The Internet of Things (IoT), Artificial Intelligence (AI) and big data analytics are some of the key players of the digital transformation that is affecting every aspect of humans' daily lives, from environmental monitoring to healthcare systems, from production processes to social interactions. In less than 20 years, people's everyday life has been revolutionized, and concepts such as Smart Home, Smart Grid and Smart City have become familiar even to non-technical users.
The integration of embedded systems, ubiquitous Internet access, and Machine-to-Machine (M2M) communications has paved the way for paradigms such as IoT and Cyber-Physical Systems (CPS) to be introduced also in high-requirement environments such as those related to industrial processes, in the forms of the Industrial Internet of Things (IIoT or I2oT) and Cyber-Physical Production Systems (CPPS). As a consequence, in 2011 the German High-Tech Strategy 2020 Action Plan for Germany first envisioned the concept of Industry 4.0, which is rapidly reshaping traditional industrial processes. The term reflects the promise of a fourth industrial revolution. Indeed, the first industrial revolution was triggered by water and steam power. Electricity and assembly lines enabled mass production in the second industrial revolution. In the third industrial revolution, the introduction of control automation and Programmable Logic Controllers (PLCs) gave a boost to factory production. As opposed to the previous revolutions, Industry 4.0 takes advantage of Internet access, M2M communications, and deep learning not only to improve production efficiency but also to enable so-called mass customization, i.e., the mass production of personalized products by means of modularized product design and flexible processes.
Less than five years later, in January 2016, the Japanese 5th Science and Technology Basic Plan took a further step by introducing the concept of the Super Smart Society, or Society 5.0. According to this vision, in the near future, scientific and technological innovation will guide our society into the next social revolution, after the hunter-gatherer, agrarian, industrial, and information eras that marked the previous ones. Society 5.0 is a human-centered society that fosters the simultaneous achievement of economic, environmental and social objectives, to ensure a high quality of life for all citizens. This information-enabled revolution aims to tackle today's major challenges such as an ageing population, social inequalities, depopulation and constraints related to energy and the environment. Accordingly, citizens will experience impressive transformations in every aspect of their daily lives.
This book offers an insight into the key technologies that are going to shape the future of industry and society. It is subdivided into five parts: Part I presents a horizontal view of the main enabling technologies, whereas Parts II-V offer a vertical perspective on four different environments.
Part I, dedicated to IoT and Sensor Network architectures, encompasses three Chapters. In Chapter 1, Peruzzi and Pozzebon analyse the literature on energy harvesting solutions for IoT monitoring systems and architectures based on Low-Power Wide Area Networks (LPWAN). The Chapter does not limit the discussion to the Long Range Wide Area Network (LoRaWAN), SigFox and Narrowband IoT (NB-IoT) communication protocols, but also includes other relevant solutions such as DASH7 and Long Term Evolution Machine Type Communication (LTE-M). In Chapter 2, Hussein et al. discuss the development of an Internet of Things message protocol that supports multi-topic messaging. The Chapter further presents the implementation of a platform that integrates the proposed communication protocol, based on a Real-Time Operating System. In Chapter 3, Li et al. investigate the heterogeneous task scheduling problem for data-intensive scenarios, with the aim of reducing the global task execution time and, consequently, data centers' energy consumption. The proposed approach maximizes efficiency by comparing the cost of remote task execution against that of data migration.
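The cost comparison underlying Chapter 3's approach can be sketched with a toy model (the parameters and formula here are a hypothetical illustration, not the chapter's actual cost model): either run the task on the node where the data already resides, or pay the migration cost to reach a faster remote node.

```python
def best_placement(task_cycles, data_size, local_speed, remote_speed, bandwidth):
    """Pick the placement with the lower completion time.
    task_cycles: CPU cycles the task needs; data_size: bytes to migrate;
    local_speed/remote_speed: cycles per second; bandwidth: bytes per second.
    A simplified illustration, not the chapter's scheduling algorithm."""
    local_time = task_cycles / local_speed
    remote_time = data_size / bandwidth + task_cycles / remote_speed
    return ("local", local_time) if local_time <= remote_time else ("remote", remote_time)

# Small task, big data: migration cost dominates -> run locally
print(best_placement(1e9, 8e9, 2e9, 8e9, 1e9))   # → ('local', 0.5)
# Heavy task, small data: the faster remote node wins
print(best_placement(64e9, 1e8, 2e9, 8e9, 1e9))  # → ('remote', 8.1)
```

Avoiding needless data migration shortens makespans and, as the chapter argues, directly lowers data-center energy consumption.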
Part II is dedicated to Industry 4.0 and includes two Chapters. In Chapter 4, Grecuccio et al. propose a solution to integrate IoT devices by leveraging a blockchain-enabled gateway based on Ethereum, so that they do not need to rely on centralized intermediaries and third-party services. As explained in the Chapter, where the performance is evaluated in a food-chain traceability application, this solution is particularly beneficial in Industry 4.0 domains. Chapter 5, by De Fazio et al., addresses the issue of safety in workplaces by presenting a smart garment that integrates several low-power sensors to monitor environmental and biophysical parameters. This enables the detection of dangerous situations, so as to prevent, or at least reduce, the consequences of workers' accidents.
Part III comprises two Chapters on the topic of Smart Buildings. In Chapter 6, Petroșanu et al. review the literature on recent developments in the smart building sector related to the use of supervised and unsupervised machine learning models on sensory data. The Chapter pays particular attention to enhanced sensing, energy efficiency, and optimal building management. In Chapter 7, Oh examines how much educating prosumers about their energy consumption habits affects power consumption reduction and encourages energy conservation, sustainable living, and behavioral change in residential environments. In this Chapter, energy consumption monitoring is made possible through the use of smart plugs.
Smart Transport is the subject of Part IV, which includes three Chapters. In Chapter 8, Roveri et al. propose an approach that leverages small-world theory to control swarms of vehicles connected through Vehicle-to-Vehicle (V2V) communication protocols. Considering a queue dominated by short-range car-following dynamics, the Chapter demonstrates that safety and security are increased by the introduction of a few selected random long-range communications. In Chapter 9, Nitti et al. present a real-time system to observe and analyze public transport passengers' mobility by tracking them throughout their journey on public transport vehicles. The system is based on the detection of active Wi-Fi interfaces through the analysis of Wi-Fi probe requests. In Chapter 10, Miler et al. discuss the development of a tool for analyzing and comparing the efficiency achieved by integrated IT systems in the operational activities of Road Transport Enterprises (RTEs). The authors further provide a holistic evaluation of the efficiency of telematics systems in RTE operational management.
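The probe-request-based tracking of Chapter 9 can be sketched as follows (a simplified illustration that ignores MAC randomization and the packet-capture layer): count the distinct source MAC addresses observed within a sliding time window.

```python
from collections import deque

class ProbeCounter:
    """Estimate device presence from (timestamp, source_mac) probe-request
    observations, counting distinct MACs seen in the last `window` seconds."""
    def __init__(self, window=60.0):
        self.window = window
        self.events = deque()  # (timestamp, mac), in arrival order

    def observe(self, t, mac):
        self.events.append((t, mac))
        # Drop observations that have aged out of the window
        while self.events and self.events[0][0] < t - self.window:
            self.events.popleft()

    def active_devices(self):
        return len({mac for _, mac in self.events})

pc = ProbeCounter(window=60.0)
for t, mac in [(0, "aa:01"), (5, "aa:02"), (10, "aa:01"), (100, "aa:03")]:
    pc.observe(t, mac)
print(pc.active_devices())  # → 1: only aa:03 probed within the last 60 s
```

A deployed system must additionally cope with randomized MACs, which is where the probe-request fingerprinting analysed in the Chapter becomes necessary.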
The book ends with the two Chapters of Part V, on Smart Environmental Monitoring. In Chapter 11, He et al. propose a Sea Surface Temperature Prediction (SSTP) model based on a time-series similarity measure, multiple pattern learning and parameter optimization. In this strategy, the optimal parameters are determined by means of an improved Particle Swarm Optimization method. In Chapter 12, Tsipis et al. present a low-cost, WSN-based IoT system that seamlessly embeds a three-layered cloud/fog computing architecture, suitable for facilitating smart agricultural applications, especially those related to wildfire monitoring.
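The time-series-similarity idea behind the SSTP model of Chapter 11 can be sketched in toy form (not the authors' algorithm): predict the next value as the one that followed the historical window most similar to the latest observations.

```python
def similar_pattern_predict(series, window=3):
    """Find the historical window with the smallest squared distance to the
    most recent `window` values, and return the value that followed it.
    A toy analogue of similarity-based sea-surface-temperature prediction."""
    query = series[-window:]
    best_next, best_dist = None, float("inf")
    for i in range(len(series) - window):
        cand = series[i:i + window]
        dist = sum((a - b) ** 2 for a, b in zip(cand, query))
        if dist < best_dist:
            best_dist, best_next = dist, series[i + window]
    return best_next

# The latest window [1.0, 2.0] was previously followed by 3.0
sst = [1.0, 2.0, 3.0, 1.0, 2.0]
print(similar_pattern_predict(sst, window=2))  # → 3.0
```

In the Chapter's full model, the window length and similarity parameters are not fixed by hand but tuned with an improved Particle Swarm Optimization.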
We wish to thank all the authors who contributed to this book for their efforts. We express our gratitude to all the reviewers for their volunteer support and precious feedback during the review process. We hope that this book provides valuable information and spurs meaningful discussion among researchers, engineers, businesspeople, and other experts about the role of new technologies in industry and society.
Cybersecurity of Digital Service Chains
This open access book presents the main scientific results of the H2020 GUARD project. The GUARD project aims at filling the current technological gap between software management paradigms and cybersecurity models, the latter still lacking the orchestration and agility needed to effectively address the dynamicity of the former. This book provides a comprehensive review of the main concepts, architectures, algorithms, and non-technical aspects developed during three years of investigation; the description of the Smart Mobility use case developed at the end of the project gives a practical example of how the GUARD platform and related technologies can be deployed in real scenarios. We expect the book to be of interest to the broad group of researchers, engineers, and professionals who daily experience the inadequacy of outdated cybersecurity models for modern computing environments and cyber-physical systems.
Ultra-reliable Low-latency, Energy-efficient and Computing-centric Software Data Plane for Network Softwarization
Network softwarization plays a significant role in the development and deployment of the latest communication systems for 5G and beyond. It enables a more flexible and intelligent network architecture, providing support for agile network management and the rapid launch of innovative network services with substantial reductions in Capital Expense (CAPEX) and Operating Expense (OPEX). Despite these benefits, 5G systems also raise unprecedented challenges, as emerging machine-to-machine and human-to-machine communication use cases require Ultra-Reliable Low Latency Communication (URLLC). According to empirical measurements performed by the author of this dissertation on a practical testbed, State of the Art (STOA) technologies and systems are not able to achieve the one-millisecond end-to-end latency requirement of the 5G standard on Commercial Off-The-Shelf (COTS) servers. This dissertation provides a comprehensive introduction to three innovative approaches that improve different aspects of the current software-driven network data plane. All three approaches are carefully designed, professionally implemented and rigorously evaluated. According to the measurement results, these novel approaches advance the research in the design and implementation of an ultra-reliable, low-latency, energy-efficient and computing-centric software data plane for 5G communication systems and beyond.
Dynamic Resource Allocation in Embedded, High-Performance and Cloud Computing
The availability of many-core computing platforms enables a wide variety of technical solutions for systems across the embedded, high-performance and cloud computing domains. However, large-scale many-core systems are notoriously hard to optimise. Choices regarding resource allocation alone can account for wide variability in timeliness and energy dissipation (up to several orders of magnitude). Dynamic Resource Allocation in Embedded, High-Performance and Cloud Computing covers dynamic resource allocation heuristics for many-core systems, aiming to provide appropriate guarantees on performance and energy efficiency. It addresses different types of systems, aiming to harmonise the approaches to dynamic allocation across the complete spectrum, from systems with little flexibility and strict real-time guarantees all the way to highly dynamic systems with soft performance requirements. Technical topics presented in the book include:
• Load and Resource Models
• Admission Control
• Feedback-based Allocation and Optimisation
• Search-based Allocation Heuristics
• Distributed Allocation based on Swarm Intelligence
• Value-Based Allocation
Each of the topics is illustrated with examples based on realistic computational platforms such as Network-on-Chip many-core processors, grids and private cloud environments.
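One of the listed topics, value-based allocation, can be sketched as a simplified greedy illustration (not the book's actual heuristics): rank candidate tasks by value per core and admit them while capacity remains.

```python
def value_based_allocation(tasks, cores_available):
    """Greedy value-based admission: sort candidate tasks by value density
    (value per core) and admit while cores remain.
    `tasks` is a list of (name, cores_needed, value) tuples; a toy model."""
    ranked = sorted(tasks, key=lambda t: t[2] / t[1], reverse=True)
    admitted, free = [], cores_available
    for name, cores, value in ranked:
        if cores <= free:
            admitted.append(name)
            free -= cores
    return admitted

tasks = [("video", 4, 8.0), ("logging", 1, 3.0), ("batch", 6, 6.0)]
print(value_based_allocation(tasks, 8))  # → ['logging', 'video']
```

A real allocator would combine this with admission control and feedback on measured timeliness, which is why the book treats those topics together.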
Cellular and Wi-Fi technologies evolution: from complementarity to competition
This PhD thesis has the characteristic of spanning a long period of time because, while working on it, I was employed as a research engineer at CTTC with highly demanding development duties. This delayed the deposit more than I would have liked. On the other hand, it has given me the privilege of witnessing and studying how wireless technologies have evolved over a decade, from 4G to 5G and beyond.
When I started my PhD thesis, IEEE and 3GPP were defining the two main wireless technologies of the time, Wi-Fi and LTE, covering two substantially complementary market targets. Wi-Fi was designed to operate mostly indoors, in unlicensed spectrum, and was meant to be a simple and cheap technology. Its primary approach to coexistence was based on the assumption that the spectrum on which it operated was free of charge, and so it was designed around interference avoidance through the famous CSMA/CA protocol. On the other hand, 3GPP was designing technologies for licensed spectrum, a costly kind of spectrum. As a result, LTE was designed to take the best advantage of it while providing the best QoE in mainly outdoor scenarios.
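The CSMA/CA interference avoidance mentioned above can be illustrated with a toy binary-exponential-backoff loop (a sketch of the principle, not the full IEEE 802.11 DCF with its slot timing and ACKs):

```python
import random

def csma_ca_attempt(channel_busy, cw_min=16, cw_max=1024, max_retries=7, rng=random):
    """Toy CSMA/CA sender: wait a random backoff drawn from the contention
    window, sense the channel, and double the window on each failure
    (binary exponential backoff). `channel_busy` stands in for carrier
    sensing; returns total backoff slots waited, or None after giving up."""
    cw, waited = cw_min, 0
    for _ in range(max_retries):
        waited += rng.randrange(cw)       # random backoff within the window
        if not channel_busy():
            return waited                  # channel idle: transmit now
        cw = min(cw * 2, cw_max)          # busy: back off more aggressively
    return None

rng = random.Random(0)
busy_then_free = iter([True, True, False])
slots = csma_ca_attempt(lambda: next(busy_then_free), rng=rng)
print(slots is not None)  # the sender eventually finds the channel idle
```

The key design choice is that contention grows multiplicatively under load, spreading competing senders out in time without any central coordination, which is exactly what made Wi-Fi viable in free, shared spectrum.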
The PhD thesis starts in this context and evolves with these two technologies. In the first chapters, the thesis studies radio resource management solutions for standalone operation of Wi-Fi in unlicensed spectrum and LTE in licensed spectrum. We anticipated the now fundamental machine learning trend by working on machine learning-based radio resource management solutions to improve LTE and Wi-Fi operation in their respective spectrum. We pay particular attention to small cell deployments aimed at improving spectrum efficiency in licensed bands, reproducing the short-range scenarios typical of Wi-Fi settings.
IEEE and 3GPP continued evolving the technologies over the years: Wi-Fi has grown into a much more complex and sophisticated technology, incorporating key features of cellular technologies, such as HARQ, OFDMA, MU-MIMO, MAC scheduling and spatial reuse. On the other hand, since Release 13, cellular networks have also been designed for unlicensed spectrum. As a result, the last two chapters of this thesis focus on coexistence scenarios, in which LTE needs to be designed to coexist fairly with Wi-Fi, and NR, the radio access technology for 5G, with Wi-Fi at 5 GHz and WiGig at 60 GHz. Unlike LTE, which was adapted to operate in unlicensed spectrum, NR-U is natively designed with this feature, including the capability to operate in unlicensed spectrum in a completely standalone fashion, a fundamental new milestone for cellular. In this context, our focus of analysis changes. We consider that these two technological families are no longer targeting complementarity but are now competing, and we claim that this will be the trend for the years to come.
To enable research in these multi-RAT scenarios, another fundamental result of this PhD thesis, besides the scientific contributions, is the release of high-fidelity models for LTE and NR and their coexistence with Wi-Fi and WiGig to the ns-3 open-source community. ns-3 is a popular open-source network simulator with the characteristic of being multi-RAT, and so it naturally allows the evaluation of coexistence scenarios between different technologies. These models, whose development I led, are, by academic citations, the most used open-source simulation models for LTE and NR, and have received funding from industry (Ubiquisys, WFA, SpiderCloud, Interdigital, Facebook) and federal agencies (NIST, LLNL) over the years.