
    Design and Experimental Validation of a Software-Defined Radio Access Network Testbed with Slicing Support

    Network slicing is a fundamental feature of 5G systems that partitions a single network into a number of segregated logical networks, each optimized for a particular type of service, or dedicated to a particular customer or application. The realization of network slicing is particularly challenging in the Radio Access Network (RAN), where multiple slices can be multiplexed over the same radio channel and Radio Resource Management (RRM) functions must be used to split the cell radio resources and achieve the expected behaviour per slice. In this context, this paper describes the key design and implementation aspects of a Software-Defined RAN (SD-RAN) experimental testbed with slicing support. The testbed has been designed consistently with the slicing capabilities and related management framework established by 3GPP in Release 15. The testbed is used to demonstrate the provisioning of RAN slices (e.g. the preparation, commissioning and activation phases) and the operation of the implemented RRM functionality for slice-aware admission control and scheduling.
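
    The slice-aware RRM functions named in this abstract lend themselves to a compact illustration. The Python sketch below shows one plausible shape of slice-aware admission control and proportional-share scheduling; the slice names, shares, and admission limits are assumptions for the example, not the testbed's actual parameters or code.

```python
# Minimal sketch (not the testbed's implementation) of two RRM
# functions: slice-aware admission control and a scheduler that
# splits cell radio resources among slices. All values are invented.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    share: float          # configured fraction of cell capacity
    max_users: int        # admission limit for this slice
    active_users: int = 0

def admit(sl: Slice) -> bool:
    """Slice-aware admission control: accept a new user only while
    the slice stays within its configured capacity."""
    if sl.active_users < sl.max_users:
        sl.active_users += 1
        return True
    return False

def schedule(slices: list[Slice], total_prbs: int) -> dict[str, int]:
    """Split the cell's Physical Resource Blocks (PRBs) among slices
    with active users, in proportion to their configured shares."""
    active = [s for s in slices if s.active_users > 0]
    total_share = sum(s.share for s in active) or 1.0
    return {s.name: int(total_prbs * s.share / total_share) for s in active}

embb = Slice("eMBB", share=0.6, max_users=10)
urllc = Slice("URLLC", share=0.4, max_users=2)
admit(embb)
admit(urllc)
print(schedule([embb, urllc], total_prbs=100))  # {'eMBB': 60, 'URLLC': 40}
```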

    Quality of experience aware adaptive hypermedia system

    The research reported in this thesis proposes, designs and tests a novel Quality of Experience Layer (QoE-layer) for the classic Adaptive Hypermedia Systems (AHS) architecture. Its goal is to improve the end-user perceived Quality of Service in different operational environments suitable for residential users. While the AHS’ main role of delivering personalised content is not altered, its functionality and performance are improved, and thus so is user satisfaction with the service provided. The QoE Layer takes into account multiple factors that affect Quality of Experience (QoE), such as Web components and the network connection. It uses a novel Perceived Performance Model (PPM) that takes into consideration a variety of performance metrics in order to learn about the Web user’s operational environment, about changes in the network connection, and about the consequences of these changes for the user’s quality of experience. The model also considers the user’s subjective opinion about his/her QoE, increasing its effectiveness, and suggests strategies for tailoring Web content in order to improve QoE. The user-related information is modelled using a stereotype-based technique that makes use of probability and distribution theory. The QoE-layer has been assessed through both simulations and qualitative evaluation in the educational area (mainly distance learning), with users interacting with the system in a low-bit-rate operational environment. The simulations assessed the “learning” and “adaptability” behaviour of the proposed layer over different and variable home connections while a learning task was performed. The correctness of the PPM’s suggestions, the access time of the learning process, and the quantity of transmitted data were analysed. The results show that the QoE layer significantly improves the access time of the learning process while reducing the quantity of data sent, by using image compression and/or elimination. A visual quality assessment confirmed that this image quality reduction does not significantly affect the viewers’ perceived quality, which remained close to the “good” perceptual level. For the qualitative evaluation, the QoE layer was deployed on the open-source AHA! system. The goal of this evaluation was to compare the learning outcome, system usability and user satisfaction when the AHA! and QoE-aware AHA systems were used. The assessment was performed in terms of learner achievement, learning performance and usability. The results indicate that the QoE-aware AHA system did not affect the learning outcome (students had similar learning achievements), but learning performance was improved in terms of study time. Most significantly, QoE-aware AHA provides an important improvement in system usability, as indicated by users’ opinions about their satisfaction related to QoE.
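
    As an illustration of the kind of adaptation such a Perceived Performance Model drives, the sketch below picks a content-tailoring strategy (full quality, compressed images, or image elimination) from an estimated access time and the user's subjective QoE rating. The threshold, the weighting of the subjective rating, and the function names are hypothetical; the thesis's actual model is considerably richer.

```python
# Hypothetical sketch of a PPM-style adaptation decision; the target
# access time, the opinion weighting, and the strategy names are
# assumptions, not the thesis's actual model.
TARGET_ACCESS_TIME = 4.0  # seconds; assumed "good" QoE threshold

def suggest_strategy(page_bytes: int, throughput_bps: float,
                     user_opinion: float) -> str:
    """Combine an objective access-time estimate with the user's
    subjective QoE rating (0 = bad .. 1 = excellent)."""
    est_time = page_bytes * 8 / throughput_bps
    # A dissatisfied user makes the perceived time worse, so
    # adaptation triggers earlier for unhappy users.
    perceived = est_time * (2.0 - user_opinion)
    if perceived <= TARGET_ACCESS_TIME:
        return "full quality"
    if perceived <= 2 * TARGET_ACCESS_TIME:
        return "compress images"
    return "eliminate images"

# A 500 KB page over a 128 kbit/s home connection, neutral user:
print(suggest_strategy(500_000, 128_000.0, user_opinion=0.5))
```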

    AUGURES : profit-aware web infrastructure management

    Over the last decade, advances in technology, together with the increasing use of the Internet for everyday tasks, have caused profound changes in end-users, as well as in businesses and technology providers. The widespread adoption of high-speed and ubiquitous Internet access is also changing the way users interact with Web applications and their expectations in terms of Quality-of-Service (QoS) and User eXperience (UX). Recently, Cloud computing has been rapidly adopted to host and manage Web applications, due to its inherent cost effectiveness and on-demand scaling of infrastructures. However, system administrators still need to make manual decisions about the parameters that affect the business results of their applications, i.e., setting QoS targets and defining metrics for scaling the number of servers during the day. Therefore, understanding the workload and user behavior (the demand) poses new challenges for capacity planning and scalability (the supply), and ultimately for the success of a Web site. This thesis contributes to the current state of the art of Web infrastructure management by providing: i) a methodology for predicting Web session revenue; ii) a methodology to determine the effect of high response times on sales; and iii) a policy for profit-aware resource management that relates server capacity to QoS and sales. The approach leverages Machine Learning (ML) techniques on custom, real-life datasets from an Ecommerce retailer featuring popular Web applications. The experiments show how user behavior and server performance models can be built from offline information to determine how demand and supply relate as resources are consumed. This produces economic metrics that feed profit-aware policies, allowing cloud infrastructures to self-configure to an optimal number of servers under a variety of conditions. At the same time, the thesis provides several insights applicable to improving autonomic infrastructure management and the profitability of Ecommerce applications.
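
    A toy version of the profit-aware policy (iii) makes the demand/supply relation concrete: choose the server count that maximizes expected revenue minus infrastructure cost, where revenue degrades as response time grows. The revenue and capacity models below are invented stand-ins for the ML models the thesis learns from real datasets.

```python
# Minimal sketch of a profit-aware scaling policy in the spirit of
# the thesis; all constants and models here are illustrative.

def expected_revenue(sessions: int, response_time_s: float) -> float:
    """Assumed model: revenue per session decays once response time
    exceeds a 1 s QoS target (high response times depress sales)."""
    per_session = 2.0 * max(0.0, 1.0 - 0.2 * max(0.0, response_time_s - 1.0))
    return sessions * per_session

def response_time(sessions: int, servers: int) -> float:
    """Assumed capacity model: each server sustains 100 concurrent
    sessions at the 1 s target; load beyond that degrades QoS."""
    return max(1.0, sessions / (100.0 * servers))

def best_server_count(sessions: int, cost_per_server: float,
                      max_servers: int = 50) -> int:
    """Pick the server count that maximizes revenue minus cost."""
    def profit(n: int) -> float:
        rt = response_time(sessions, n)
        return expected_revenue(sessions, rt) - n * cost_per_server
    return max(range(1, max_servers + 1), key=profit)

print(best_server_count(sessions=2000, cost_per_server=5.0))  # -> 20
```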

    A Runtime Verification and Validation Framework for Self-Adaptive Software

    The concepts that make self-adaptive software attractive also make it more difficult for users to gain confidence that these systems will consistently meet their goals under uncertain contexts. To improve user confidence in self-adaptive behavior, machine-readable conceptual models have been developed to instrument the adaptation behavior of the target software system and its primary feedback loop. By comparing these machine-readable models to the self-adaptive system, runtime verification and validation may be introduced as another method to increase confidence in self-adaptive systems; however, the existing conceptual models do not provide the semantics needed to institute such runtime verification or validation. This research confirms that introducing runtime verification and validation for self-adaptive systems requires expanding existing conceptual models with quality-of-service metrics, a hierarchy of goals, and states with temporal transitions. Based on these expanded semantics, runtime verification and validation was introduced as a second-level feedback loop to improve the performance of the primary feedback loop and to quantitatively measure the quality of service achieved in a state-based, self-adaptive system. A web-based purchasing application running in a cloud-based environment was the focus of experimentation. In order to meet changing customer purchasing demand, the self-adaptive system monitored external context changes and increased or decreased the number of available application servers. The runtime verification and validation system operated as a second-level feedback loop to monitor quality-of-service goals based on internal context, and corrected self-adaptive behavior when goals were violated. Two competing quality-of-service goals were introduced to maintain customer satisfaction while minimizing cost. The research demonstrated that the addition of a second-level runtime verification and validation feedback loop did quantitatively improve self-adaptive system performance, even with simple, static monitoring rules.
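
    The two-level feedback arrangement can be sketched as follows: a primary loop scales servers with external demand, and a second-level verification loop checks the two competing goals (customer satisfaction versus cost) against internal context and corrects the primary loop when a goal is violated. All thresholds and models here are simple static stand-ins, in the spirit of the paper's static monitoring rules rather than its actual implementation.

```python
# Hedged sketch of a second-level verification loop correcting a
# primary self-adaptation loop; every rule and constant is invented.

class PrimaryLoop:
    def __init__(self) -> None:
        self.servers = 2

    def adapt(self, demand: int) -> None:
        # Naive primary rule: one server per 100 requests of demand.
        self.servers = max(1, demand // 100)

class VerificationLoop:
    """Second-level loop: verify QoS goals at runtime and override
    the primary loop's decision when a goal is violated."""
    MAX_LATENCY = 2.0   # customer-satisfaction goal (seconds)
    MAX_SERVERS = 8     # cost goal

    def verify_and_correct(self, primary: PrimaryLoop, demand: int) -> None:
        latency = demand / (primary.servers * 60.0)  # assumed model
        if latency > self.MAX_LATENCY:
            primary.servers += 1                 # satisfaction violated
        if primary.servers > self.MAX_SERVERS:
            primary.servers = self.MAX_SERVERS   # cost goal wins

primary, verifier = PrimaryLoop(), VerificationLoop()
for demand in (150, 400, 1200):
    primary.adapt(demand)
    verifier.verify_and_correct(primary, demand)
    print(demand, primary.servers)  # 150->2, 400->4, 1200->8 (capped)
```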

    Automated test-based learning and verification of performance models for microservices systems

    Effective and automated verification techniques able to provide assurances of performance and scalability are in high demand in the context of microservices systems. In this paper, we introduce a methodology that applies specification-driven load testing to learn the behavior of the target microservices system under multiple deployment configurations. Testing is driven by realistic workload conditions sampled in production. The sampling produces a formal description of the users' behavior as a Discrete Time Markov Chain. This model drives multiple load-testing sessions that query the system under test and feed a Bayesian inference process, which incrementally refines the initial model into a complete specification of run-time behavior as a Continuous Time Markov Chain. The complete specification is then used to conduct automated verification through probabilistic model checking and to compute a configuration score that evaluates alternative deployment options. This paper introduces the methodology, its theoretical foundation, and the toolchain we developed to automate it. Our empirical evaluation shows its applicability, benefits, and costs on a representative microservices system benchmark. We show that the methodology detects performance issues, traces them back to system-level requirements, and, thanks to the configuration score, provides engineers with insights on deployment options. A comparison between our approach and a selected state-of-the-art baseline shows that we are able to reduce the cost by up to 73% in terms of the number of tests. The verification stage requires negligible execution time and memory consumption: we observed that the verification of 360 system-level requirements took ~1 minute while consuming at most 34 KB, and the computation of the score involved the verification of ~7k automatically generated properties in ~72 seconds using at most ~50 KB. © 2022 The Author(s). Published by Elsevier Inc.
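
    To make the workload-model step concrete, the sketch below encodes user behavior as a Discrete Time Markov Chain and samples synthetic sessions from it, the way the methodology's load-testing sessions are driven. The states and transition probabilities are invented for the example; in the methodology they are estimated from production samples and later refined by Bayesian inference into a Continuous Time Markov Chain.

```python
# Illustrative DTMC workload model; states and probabilities are
# invented, not taken from the paper's benchmark.
import random

# Transition matrix as estimated from sampled production sessions.
DTMC = {
    "home":     {"browse": 0.7, "exit": 0.3},
    "browse":   {"browse": 0.4, "cart": 0.3, "exit": 0.3},
    "cart":     {"checkout": 0.6, "browse": 0.2, "exit": 0.2},
    "checkout": {"exit": 1.0},
}

def sample_session(start: str = "home",
                   rng: random.Random | None = None) -> list[str]:
    """Draw one synthetic user session by walking the DTMC until
    the absorbing 'exit' state is reached."""
    rng = rng or random.Random()
    path, state = [start], start
    while state != "exit":
        state = rng.choices(list(DTMC[state]),
                            weights=list(DTMC[state].values()))[0]
        path.append(state)
    return path

# Generate the request sequences for one load-testing session.
rng = random.Random(42)
for _ in range(3):
    print(" -> ".join(sample_session(rng=rng)))
```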

    Telecommunications Networks

    This book guides readers through the basics of rapidly emerging networks to more advanced concepts and future expectations of telecommunications networks. It identifies and examines the most pressing research issues in telecommunications, and it contains chapters written by leading researchers, academics and industry professionals. Telecommunications Networks - Current Status and Future Trends covers surveys of recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation, quality of service, etc. The book, which is suitable for both PhD and master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering, and Routing.

    Fourth ERCIM workshop on e-mobility
