
    Resource Allocation in 4G and 5G Networks: A Review

    The advent of 4G and 5G broadband wireless networks brings several challenges with respect to resource allocation. In an interconnected network of wireless users and devices, all compete for scarce resources, which makes the fair and efficient allocation of those resources essential to the proper functioning of the networks. The purpose of this study is to identify the different factors involved in resource allocation in 4G and 5G networks. The methodology was an empirical, qualitative study: reviewing the state of the art in 4G and 5G networks, analysing their respective architectures and resource allocation mechanisms, identifying parameters and criteria, and providing recommendations. It was observed that resource allocation in 4G and 5G networks primarily concerns radio resources, owing to their wireless nature, and is measured in terms of delay, fairness, packet loss ratio, spectral efficiency, and throughput. Minimal consideration is given to other resources along the end-to-end 4G and 5G network architectures. This paper defines additional types of resources, such as electrical energy, processor cycles, and memory space, along end-to-end architectures, whose allocation processes need greater emphasis owing to the inclusion of software-defined networking and network function virtualization in 5G network architectures. Accordingly, additional criteria, such as electrical energy usage, processor cycles, and memory, are proposed to evaluate resource allocation. Finally, ten recommendations are made to enhance resource allocation along the whole 5G network architecture.
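Fairness, one of the evaluation criteria listed above, is commonly quantified with Jain's fairness index. As a minimal illustration (Jain's index is a standard metric, not something defined in this review):

```python
def jains_fairness(allocations):
    """Jain's fairness index over per-user allocations:
    1.0 means perfectly equal shares; 1/n means one user got everything."""
    n = len(allocations)
    total = sum(allocations)
    sum_sq = sum(x * x for x in allocations)
    return (total * total) / (n * sum_sq)

# Equal shares are perfectly fair; a single greedy user is maximally unfair.
print(jains_fairness([10, 10, 10, 10]))  # 1.0
print(jains_fairness([40, 0, 0, 0]))     # 0.25
```

The same index applies to any of the resource types the review discusses (radio blocks, processor cycles, memory), since it only depends on the vector of per-user shares.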

    Enhancing User Experience by Extracting Application Intelligence from Network Traffic

    Internet Service Providers (ISPs) continue to receive complaints from users about poor experience across diverse Internet applications, ranging from video streaming and gaming to social media and teleconferencing. Identifying and rectifying the root cause of these poor-experience events requires the ISP to know more than just coarse-grained measures like link utilizations and packet losses. Application classification and experience measurement using traditional deep packet inspection (DPI) techniques is starting to fail with the increasing adoption of traffic encryption, and is not cost-effective given the explosive growth in traffic rates. This thesis leverages the emerging paradigms of machine learning and programmable networks to design and develop systems that can deliver application-level intelligence to ISPs at a scale, cost, and accuracy that have hitherto not been achieved. This thesis makes four new contributions. Our first contribution develops a novel transformer-based neural network model that classifies applications based on their traffic shape, agnostic to encryption. We show that this approach achieves over 97% F1-score for diverse application classes such as video streaming and gaming. Our second contribution builds and validates algorithmic and machine learning models to estimate user experience metrics for on-demand and live video streaming applications, such as bitrate, resolution, buffer states, and stalls. For our third contribution, we analyse ten popular latency-sensitive online multiplayer games and develop data structures and algorithms to rapidly and accurately detect each game using automatically generated signatures. By combining this with active latency measurement and geolocation analysis of the game servers, we help ISPs determine better routing paths to reduce game latency.
Our fourth and final contribution develops a prototype of a self-driving network that autonomously intervenes just-in-time to relieve applications impacted by transient congestion. We design and build a complete system that extracts application-aware network telemetry from programmable switches and dynamically adapts QoS policies to manage bottleneck resources in an application-fair manner. We show that it outperforms known queue management techniques in various traffic scenarios. Taken together, our contributions allow ISPs to measure and tune their networks in an application-aware manner to offer their users the best possible experience.
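The idea of classifying applications by their traffic shape, independent of payload encryption, can be sketched as follows. The feature encoding below (packet sizes, inter-arrival times, and directions as a token sequence) is a hypothetical illustration of the kind of input such a model consumes, not the thesis's actual transformer pipeline:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    ts: float       # arrival time in seconds
    size: int       # bytes on the wire
    upstream: bool  # True if client -> server

def flow_shape(packets, max_len=16):
    """Encode a flow as a sequence of (size, inter-arrival, direction)
    tuples: the 'traffic shape' that remains visible under encryption."""
    seq = []
    prev_ts = packets[0].ts
    for p in packets[:max_len]:
        seq.append((p.size, round(p.ts - prev_ts, 6), 1 if p.upstream else 0))
        prev_ts = p.ts
    return seq

# A request followed by a large downstream response:
pkts = [Packet(0.0, 100, True), Packet(0.05, 1400, False)]
print(flow_shape(pkts))  # [(100, 0.0, 1), (1400, 0.05, 0)]
```

A sequence model (such as the transformer the thesis describes) can then be trained on these token sequences, with the application class as the label.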

    A QoS-QoE correlation model in an OTT-Telco telecommunications service provisioning environment

    BACKGROUND Provisioning Quality of Experience (QoE) in telecommunications services requires management systems that can monitor and control users' QoE as they consume the internet services delivered over the operator's network. Indeed, users' high data consumption demands, at the network management level, the allocation of sufficient resources for services to perform well. In particular, the Quality of Service (QoS) configuration offered by the operator within its operational domain becomes fundamental for treating traffic appropriately, so that end users' perceived service quality can be kept within a tolerance threshold according to the policies established by the telecommunications company (Telco). Consequently, a QoS-QoE correlation model is key to provisioning internet services over the telecom operator's infrastructure. AIMS This doctoral thesis proposes a QoS-QoE correlation model for an OTT-Telco telecommunications service provisioning environment. To this end, five general actions are carried out: (i) characterize the QoS parameters with the greatest effect on the degradation of OTT services; (ii) determine the features, conditions, parameters, and measures of QoE in the provision of an OTT service; (iii) establish the conditions and restrictions for providing an OTT service on a Telco's infrastructure while maintaining a good QoS-QoE relationship; (iv) develop a QoE estimation or prediction mechanism based on the QoS influence factors that affect the provision of an OTT service; and (v) experimentally evaluate the QoS-QoE correlation model. METHODS To meet these objectives, a model comprising a Conceptual and an Operational macro-component was defined. The Conceptual macro-component follows Jabareen's methodology for building conceptual frameworks, and the Operational macro-component is aligned with the phases defined for data mining projects, CRISP-DM. Additionally, test designs were used to validate the estimation model based on machine learning algorithms: for each algorithm, the initial operating parameters, the configurations of the different tests, and the metrics used to evaluate its performance were defined. RESULTS The most important results achieved are the following: a strategic map of the state of the science in QoE provisioning for OTT services, a conceptualization of the profiles of the correlation model, a mathematical model for QoE assessment according to users' consumption behaviour, a labelled traffic dataset that relates network behaviour to users' perceived quality, and a model for estimating users' QoE from network traffic behaviour. CONCLUSIONS The QoS-QoE correlation model can be used in QoE management systems where the Telco requires more objective diagnosis and monitoring of its users' perceived service quality within its provisioning network. Likewise, the use of additional user-context parameters would enrich QoE management systems in the provisioning of OTT services.
Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Chair: Jesús García Herrero. Secretary: José Armando Ordóñez Córdoba. Panel member: Juan Carlos Cuéllar Quiñóne
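The QoS-to-QoE estimation step can be illustrated with a toy mapping from network-level parameters to a mean opinion score (MOS). The linear form and coefficients below are illustrative placeholders, not the fitted machine learning model from the thesis:

```python
def estimate_mos(delay_ms, jitter_ms, loss_pct):
    """Map QoS measurements to an estimated MOS on the usual 1-5 scale.
    Coefficients are hypothetical; a real model would be fitted to a
    labelled traffic dataset such as the one this thesis builds."""
    mos = 4.5 - 0.01 * delay_ms - 0.02 * jitter_ms - 0.3 * loss_pct
    return max(1.0, min(5.0, mos))

# Mild degradation keeps MOS high; severe delay and loss clamp it to 1.0.
print(estimate_mos(50, 10, 0.5))
print(estimate_mos(1000, 100, 10.0))  # 1.0
```

In practice the thesis replaces this hand-written formula with models learned from data; the interface (QoS parameters in, estimated QoE out) is what the sketch is meant to convey.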

    A study into scalable transport networks for IoT deployment

    The growth of the internet towards the Internet of Things (IoT) has changed the way we live. Intelligent (smart) devices that can act autonomously have enabled new applications, for example industrial automation, smart healthcare systems, and autonomous transportation, to name just a few. These applications have dramatically improved the way we live as citizens. While the internet continues to grow at an unprecedented rate, this growth has been coupled with growing demands for new services, e.g. machine-to-machine (M2M) communications and smart metering. The Transmission Control Protocol/Internet Protocol (TCP/IP) architecture was developed decades ago and was not designed to meet these exponential demands. This has contributed to the complexity of the internet and to its inflexible, rigid state. The challenges of reliability, scalability, interoperability, inflexibility, and vendor lock-in, among many others, remain a concern in existing (traditional) networks. In this study, an evolutionary approach to implementing a "Scalable IoT Data Transmission Network" (S-IoT-N) is proposed, leveraging existing transport networks. Most importantly, the proposed approach attempts to address the above challenges by using open (existing) standards and by building on traditional transport networks. A Proof-of-Concept (PoC) of the proposed S-IoT-N is attempted on a physical network testbed and demonstrated along with basic network connectivity services over it. Finally, the results are validated by an experimental performance evaluation of the PoC physical network testbed, along with recommendations for improvement and future work.

    Performance Evaluation And Anomaly detection in Mobile BroadBand Across Europe

    With the rapidly growing smartphone market and users' expectation of immediate access to high-quality multimedia content, the delivery of video over wireless networks has become a big challenge, making it difficult to provide end users with a flawless quality of service. The growth of the smartphone market goes hand in hand with the development of the Internet, in which current transport protocols are being re-evaluated to deal with traffic growth. QUIC and WebRTC are new and evolving standards. WebRTC was developed explicitly to meet this demand and enable a high-quality experience for mobile users of real-time communication services; QUIC was designed to reduce Web latency, integrate security features, and allow a high-quality experience for mobile users. Thus, evaluating the performance of these rising protocols outside controlled settings is essential to understand network behaviour and provide end users with a better multimedia delivery service. Since most work in the research community is conducted in controlled environments, we leverage the MONROE platform to investigate the performance of QUIC and WebRTC in real cellular networks using static and mobile nodes. In this Thesis, we conduct measurements of WebRTC and QUIC and make their datasets public for interested experimenters. Building such datasets is very welcome in the research community, opening doors to applying data science to network data. The development part of the experiments involves building Docker containers that act as QUIC and WebRTC clients. These containers are publicly available and can be used standalone or within the MONROE platform. These key contributions are presented in Chapters 4 and 5 in Part II of the Thesis.
We exploit the data collected with MONROE to apply data science to network datasets, which helps identify networking problems and shifts the Thesis focus from performance evaluation to a data science problem. Indeed, the second part of the Thesis focuses on interpretable data science. Identifying network problems by leveraging Machine Learning (ML) has gained much visibility in the past few years, resulting in dramatically improved cellular network services. However, critical tasks like troubleshooting cellular networks are still performed manually by experts who monitor the network around the clock. In this context, this Thesis proposes the use of simple, interpretable ML algorithms, moving away from the current trend of high-accuracy ML algorithms (e.g., deep learning) that do not allow interpretation (and hence understanding) of their outcome. We accept lower accuracy because the scenarios misclassified by the ML algorithms are precisely the interesting (anomalous) ones, and we do not want to miss them by overfitting. To this aim, we present CIAN (Causality Inference of Anomalies in Networks), a practical and interpretable ML methodology, which we implement in a software tool named TTrees (Troubleshooting Trees) and compare to a supervised counterpart named STrees (Supervised Trees). Both methodologies require small volumes of data and are quick to train. Our experiments using real data from operational commercial mobile networks, e.g., sampled with MONROE probes, show that STrees and CIAN can automatically identify and accurately classify network anomalies, e.g., cases in which low network performance is not justified by operational conditions, training with just a few hundred data samples, hence enabling precise troubleshooting actions. Most importantly, our experiments show that a fully automated unsupervised approach is viable and efficient. These contributions are presented in Part III of the Thesis, which includes Chapters 6 and 7.
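The appeal of interpretable models like the trees behind TTrees and STrees is that each decision is a readable rule. As a minimal sketch of that idea (a single-rule "stump", not the actual CIAN/TTrees algorithm), one can learn the one threshold rule that best separates anomalous from normal samples:

```python
def best_rule(samples):
    """Find the single most accurate rule of the form 'value <= t' for
    labelling a sample anomalous. samples: list of (value, is_anomaly).
    Returns (threshold, training accuracy): a one-node decision tree."""
    best_t, best_acc = None, -1.0
    for t in sorted({v for v, _ in samples}):
        correct = sum((v <= t) == is_anom for v, is_anom in samples)
        acc = correct / len(samples)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Low throughput (Mbps) under good radio conditions is the anomaly here.
data = [(1.2, True), (0.8, True), (22.0, False), (18.5, False)]
print(best_rule(data))  # (1.2, 1.0)
```

The learned rule ("flag the sample if throughput <= 1.2 Mbps") is directly readable by the expert doing the troubleshooting, which is the interpretability property the Thesis argues for.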
In conclusion, in this Thesis we go through a data-driven networking roller coaster, from evaluating the performance of upcoming network protocols in real mobile networks to building methodologies that help identify and classify the root cause of networking problems, emphasizing that these methodologies are easy to implement and can be deployed in production environments.
This work has been supported by IMDEA Networks Institute. Doctoral Programme in Multimedia and Communications, Universidad Carlos III de Madrid and Universidad Rey Juan Carlos. Chair: Matteo Sereno. Secretary: Antonio de la Oliva Delgado. Panel member: Raquel Barco Moren

    Data Driven Network Design for Cloud Services Based on Historic Utilization

    In recent years we have seen a shift from traditional enterprise networking with Data Center centric architectures towards cloud services. Companies are moving away from private networking technologies like MPLS as they migrate their application workloads to the cloud. With these migrations, network architects must grapple with how to design and build new network infrastructure to support the cloud for all their end users, including office workers, remote workers, and home office workers. The main goal of network design is to maximize availability and performance while minimizing cost. However, network architects and engineers tend to over-provision networks by sizing bandwidth for worst-case scenarios, wasting millions of dollars per year. This thesis analyzes traditional network utilization data from twenty-five of the Fortune 500 companies in the United States and determines the most efficient bandwidth to support cloud services from providers like Amazon, Microsoft, Google, and others. The analysis of real-world data and the resulting proposed scaling factor is an original contribution of this study.
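The sizing idea, provisioning for a high percentile of observed demand plus headroom rather than the absolute worst case, can be sketched as follows. The 95th percentile and 25% headroom below are illustrative parameters, not the scaling factor derived in the thesis:

```python
def recommended_bandwidth(samples_mbps, percentile=95, headroom=1.25):
    """Size a cloud-facing link from historic utilization samples:
    take a high percentile of observed demand and add headroom,
    instead of provisioning for the single worst-case peak."""
    s = sorted(samples_mbps)
    # Nearest-rank percentile over the sorted samples.
    idx = min(len(s) - 1, round(percentile / 100 * (len(s) - 1)))
    return s[idx] * headroom

# 100 utilization samples ranging from 1 to 100 Mbps:
print(recommended_bandwidth(range(1, 101)))  # 118.75
```

Compared to worst-case sizing (100 Mbps plus headroom), the percentile approach ignores rare peaks, which is exactly where the over-provisioning savings come from.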

    Video Conference as a tool for Higher Education

    The book describes the activities of the consortium member institutions within the framework of the TEMPUS IV Joint Project ViCES - Video Conferencing Educational Services (144650-TEMPUS-2008-IT-JPGR). The TEMPUS project ViCES (2009-2012) was launched in 2009 to provide the basis for the development of a distance learning environment based on video conferencing systems and to develop a blended learning course methodology. This publication collects the conclusions of the project and reports the main outcomes, together with the approach followed by the different partners towards the achievement of the project's goal. The book includes several contributions focused on specific topics related to videoconferencing services, namely how to enable such services in educational contexts so that the installation and deployment of videoconferencing systems can be conceived as an integral part of virtual open campuses.

    An intent-based blockchain-agnostic interaction environment


    Cybersecurity and the Digital Health: An Investigation on the State of the Art and the Position of the Actors

    Cybercrime is exposing the health domain to growing risk. The push towards a strong connection of citizens to health services through digitalization has undisputed advantages. Digital health allows remote care, the use of medical devices with high mechatronic and IT content and strong automation, and a large interconnection of hospital networks with an increasingly effective exchange of data. However, all this requires a great cybersecurity commitment, a commitment that must start with scholars in research and then reach the stakeholders. New devices and technological solutions are increasingly breaking into healthcare and are able to change the processes of interaction in the health domain. This requires cybersecurity to become a vital part of patient safety, through changes in human behaviour, technology, and processes, as part of a complete solution. All professionals involved in cybersecurity in the health domain were invited to contribute with their experiences. This book contains contributions from various experts and different fields. It addresses aspects of cybersecurity in healthcare relating to technological advances and emerging risks, as well as the new boundaries of this field and the impact of COVID-19 on some sectors, such as mHealth. We dedicate the book to all those involved, in their different roles, in cybersecurity in the health domain.