33 research outputs found

    Energy-efficient Transitional Near-* Computing

    Studies have shown that communication networks, devices accessing the Internet, and data centers account for 4.6% of the worldwide electricity consumption. Although data centers, core network equipment, and mobile devices are becoming more energy-efficient, the amount of data that is being processed, transferred, and stored is vastly increasing. Recent computing paradigms, such as fog and edge computing, try to improve this situation by processing data near the user, the network, the devices, and the data itself. In this thesis, these trends are summarized under the new term near-* or near-everything computing. Furthermore, a novel paradigm designed to increase the energy efficiency of near-* computing is proposed: transitional computing. It transfers multi-mechanism transitions, a recently developed paradigm for a highly adaptable future Internet, from the field of communication systems to computing systems. Moreover, three types of novel transitions are introduced to achieve gains in energy efficiency in near-* environments, spanning private Infrastructure-as-a-Service (IaaS) clouds, Software-defined Wireless Networks (SDWNs) at the edge of the network, and Disruption-Tolerant Information-Centric Networks (DTN-ICNs) involving mobile devices, sensors, edge devices, and programmable components on a mobile System-on-a-Chip (SoC). Finally, the novel idea of transitional near-* computing for emergency response applications is presented to assist rescuers and affected persons during an emergency event or a disaster, even when connections to cloud services and social networks are disturbed by network outages and the network bandwidth and battery power of mobile devices are limited.
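
    A transition, in the sense used above, is a runtime switch between functionally equivalent mechanisms. As a rough illustration only (not taken from the thesis), the following Python sketch picks whichever of two hypothetical data-handling mechanisms is estimated to cost the least energy for a given workload; the mechanism names and per-megabyte energy figures are invented.

```python
# Minimal sketch (not from the thesis): a "transition" modeled as a runtime
# switch between two functionally equivalent mechanisms, chosen by estimated
# energy cost. All names and cost figures below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Mechanism:
    name: str
    joules_per_mb: float   # assumed energy cost to handle 1 MB of data

def select_mechanism(mechanisms, data_mb):
    """Pick the mechanism with the lowest estimated energy for this workload."""
    return min(mechanisms, key=lambda m: m.joules_per_mb * data_mb)

if __name__ == "__main__":
    cloud_offload = Mechanism("cloud-offload", joules_per_mb=2.5)
    edge_processing = Mechanism("edge-processing", joules_per_mb=1.1)
    chosen = select_mechanism([cloud_offload, edge_processing], data_mb=300)
    print(f"transition to: {chosen.name}")
```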

    Energy Efficient Big Data Networks

    The continuous increase of big data applications in number and type creates new challenges that should be tackled by the green ICT community. Data scientists classify big data into four main categories (4Vs): Volume (with direct implications on power needs), Velocity (with impact on delay requirements), Variety (with varying CPU requirements and reduction ratios after processing), and Veracity (with cleansing and backup constraints). Each V poses many challenges that confront the energy efficiency of the underlying networks carrying big data traffic. In this work, we investigated the impact of the big data 4Vs on energy-efficient bypass IP over WDM networks. The investigation is carried out by developing Mixed Integer Linear Programming (MILP) models that encapsulate the distinctive features of each V. In our analyses, the big data network is greened by progressively processing big data raw traffic at strategic locations, dubbed processing nodes (PNs), built into the network along the path from big data sources to the data centres. At each PN, raw data is processed and lower-rate useful information is extracted progressively, eventually reducing the network power consumption. For each V, we conducted an in-depth analysis and evaluated the network power saving that can be achieved by the energy-efficient big data network compared to the classical approach. Along the volume dimension of big data, the work dealt with optimally handling and processing an enormous number of big data chunks and extracting the corresponding knowledge carried by those chunks, transmitting knowledge instead of data, thus reducing the data volume and saving power. Variety means that there are different types of big data, such as CPU-intensive, memory-intensive, Input/Output (IO)-intensive, CPU-memory-intensive, CPU/IO-intensive, and memory-IO-intensive applications. Each type requires a different amount of processing, memory, storage, and networking resources. The processing of different varieties of big data was optimised with the goal of minimising power consumption. In the velocity dimension, we classified the processing velocity of big data into two modes: an expedited-data processing mode and a relaxed-data processing mode. Expedited data demands a larger amount of computational resources to reduce the execution time compared to relaxed data. The big data processing and transmission were optimised given the velocity dimension to reduce power consumption. Veracity specifies trustworthiness, data protection, data backup, and data cleansing constraints. We considered the implementation of data cleansing and backup operations prior to big data processing so that big data is cleansed and readied for the big data analytics stage. The analysis was carried out through dedicated scenarios considering the influence of each V's characteristic parameters. For the set of network parameters we considered, our results for network energy efficiency under the volume, variety, velocity, and veracity scenarios revealed that network power savings of up to 52%, 47%, 60%, and 58%, respectively, can be achieved by the energy-efficient big data networks approach compared to the classical approach.
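
    To make the idea of progressive processing concrete, the following Python sketch (an illustration under assumed numbers, not the MILP models used in the work) compares transport power when raw traffic crosses every hop to the data centre against the greened case where each PN forwards only the reduced, knowledge-bearing traffic; the reduction ratios and watts-per-Gb/s figure are hypothetical.

```python
# Illustrative sketch only: link power when big data traffic is progressively
# reduced at processing nodes (PNs) on the way to the data centre.
# All rates, reduction ratios, and the watts-per-Gb/s figure are assumptions.

def transport_power(raw_gbps, reduction_ratios, watts_per_gbps=10.0):
    """Sum per-hop power; each PN multiplies the forwarded rate by its ratio."""
    rate, total = raw_gbps, 0.0
    for ratio in reduction_ratios:
        total += rate * watts_per_gbps   # hop entering this PN
        rate *= ratio                    # PN extracts knowledge, forwards less data
    total += rate * watts_per_gbps       # final hop into the data centre
    return total

raw = 40.0                                        # Gb/s of raw big data traffic
classical = transport_power(raw, [1.0, 1.0])      # raw data carried over every hop
greened = transport_power(raw, [0.5, 0.4])        # two PNs with assumed ratios
print(f"power saving: {100 * (1 - greened / classical):.0f}%")
```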

    Resource Management in Softwarized Networks

    Communication networks are undergoing a major transformation through softwarization, which is changing the way networks are designed, operated, and managed. Network softwarization is an emerging paradigm where software controls the treatment of network flows, adds value to these flows by software processing, and orchestrates the on-demand creation of customized networks to meet the needs of customer applications. Software-Defined Networking (SDN), Network Function Virtualization (NFV), and Network Virtualization are three cornerstones of the overall transformation trend toward network softwarization. Together, they are empowering network operators to accelerate time-to-market for new services, diversify the supply chain for networking hardware and software, and bring the benefits of agility, economies of scale, and the flexibility of cloud computing to networks. The enhanced programmability enabled by softwarization creates unique opportunities for adapting network resources in support of applications and users with diverse requirements. To effectively leverage the flexibility provided by softwarization and realize its full potential, it is of paramount importance to devise proper mechanisms for allocating resources to different applications and users and for monitoring their usage over time. The overarching goal of this dissertation is to advance the state of the art in how resources are allocated and monitored and to build the foundation for effective resource management in softwarized networks. Specifically, we address four resource management challenges in three key enablers of network softwarization, namely SDN, NFV, and network virtualization. First, we challenge the current practice of realizing network services with monolithic software network functions and propose a microservice-based disaggregated architecture enabling finer-grained resource allocation and scaling. Then, we devise optimal solutions and scalable heuristics for establishing virtual networks with guaranteed bandwidth and guaranteed survivability against failures on multi-layer IP-over-Optical and single-layer IP substrate networks, respectively. Finally, we propose adaptive sampling mechanisms for balancing the overhead of softwarized network monitoring and the accuracy of the network view constructed from monitoring data.
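
    As a hedged illustration of the last point, the snippet below shows one simple way an adaptive sampler could trade monitoring overhead for accuracy: lengthen the polling interval while a monitored counter is stable and shorten it when the counter changes quickly. The thresholds, bounds, and update rule are assumptions for illustration, not the dissertation's actual mechanism.

```python
# Minimal sketch of adaptive sampling for softwarized network monitoring.
# Thresholds and interval bounds below are illustrative assumptions.

def next_interval(current_interval, prev_value, new_value,
                  low=0.05, high=0.20, min_iv=1.0, max_iv=60.0):
    """Return the next polling interval (seconds) based on the relative change."""
    change = abs(new_value - prev_value) / max(abs(prev_value), 1e-9)
    if change > high:                       # flow is volatile: sample twice as often
        return max(min_iv, current_interval / 2)
    if change < low:                        # flow is stable: back off
        return min(max_iv, current_interval * 2)
    return current_interval                 # otherwise keep the current rate

# e.g. a byte counter that jumped by 30% since the last poll
print(next_interval(8.0, prev_value=1_000_000, new_value=1_300_000))  # -> 4.0
```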

    Dynamic data placement and discovery in wide-area networks

    The workloads of online services and applications such as social networks, sensor data platforms, and web search engines have become increasingly global and dynamic, setting new challenges to providing users with low latency access to data. To achieve this, these services typically leverage a multi-site wide-area networked infrastructure. Data access latency in such an infrastructure depends on the network paths between users and data, which are determined by the data placement and discovery strategies. Current strategies are static: they offer low latencies upon deployment but worse performance under a dynamic workload. We propose dynamic data placement and discovery strategies for wide-area networked infrastructures, which adapt to the data access workload. We achieve this with data activity correlation (DAC), an application-agnostic approach for determining the correlations between data items based on access pattern similarities. By dynamically clustering data according to DAC, network traffic in clusters is kept local. We utilise DAC as a key component in reducing access latencies for two application scenarios, emphasising different aspects of the problem. The first scenario assumes the fixed placement of data at sites, and thus focusses on data discovery. This is the case for a global sensor discovery platform, which aims to provide low latency discovery of sensor metadata. We present a self-organising hierarchical infrastructure consisting of multiple DAC clusters, maintained with an online and distributed split-and-merge algorithm. This reduces the number of sites visited, and thus latency, during discovery for a variety of workloads. The second scenario focusses on data placement. This is the case for global online services that leverage a multi-data centre deployment to provide users with low latency access to data. We present a geo-dynamic partitioning middleware, which maintains DAC clusters with an online elastic partition algorithm. It supports the geo-aware placement of partitions across data centres according to the workload. This provides globally distributed users with low latency access to data for static and dynamic workloads.
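
    To illustrate the DAC idea, the sketch below (an assumption-laden toy, not the thesis implementation) represents each data item by its access counts per time window, measures access-pattern similarity with cosine similarity, and greedily groups items whose similarity to a cluster's first member exceeds a threshold; the similarity measure, threshold, and data are all invented for the example.

```python
# Hedged sketch of data activity correlation (DAC) as described at a high level:
# items with similar access patterns are clustered together so traffic stays local.
# The cosine measure, the greedy grouping, and the data are illustrative assumptions.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = math.sqrt(sum(x * x for x in a)), math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def dac_clusters(access_vectors, threshold=0.8):
    """Greedily place each item into the first cluster whose head it correlates with."""
    clusters = []  # each cluster: list of (item, vector)
    for item, vec in access_vectors.items():
        for cluster in clusters:
            if cosine(vec, cluster[0][1]) >= threshold:
                cluster.append((item, vec))
                break
        else:
            clusters.append([(item, vec)])
    return clusters

# items keyed by id; values are access counts per time window (hypothetical data)
items = {"a": [9, 1, 0], "b": [8, 2, 0], "c": [0, 1, 9]}
print([[i for i, _ in c] for c in dac_clusters(items)])  # -> [['a', 'b'], ['c']]
```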

    Contributions to energy-aware demand-response systems using SDN and NFV for fog computing

    Ever-increasing energy consumption, the depletion of non-renewable resources, the climate impact associated with energy generation, and finite energy-production capacity are important concerns worldwide that drive the urgent creation of new energy management and consumption schemes. In this regard, by leveraging the massive connectivity provided by emerging communications such as 5G systems, this thesis proposes a long-term sustainable Demand-Response solution for the adaptive and efficient management of available energy consumption for Internet of Things (IoT) infrastructures, in which energy utilization is optimized based on the available supply. In the proposed approach, energy management focuses on consumer devices (e.g., appliances such as a light bulb or a screen). By proposing that each consumer device be part of an IoT infrastructure, it becomes feasible to control its respective consumption. The proposal includes an architecture that uses Network Functions Virtualization (NFV) and Software Defined Networking technologies as enablers to promote the primary use of energy from renewable sources. Associated with the architecture, this thesis presents a novel consumption model conditioned on availability in which consumers are part of the management process. To efficiently use the energy from renewable and non-renewable sources, several management strategies are herein proposed, such as the prioritization of the energy supply, workload scheduling using time-shifting capabilities, and quality degradation to decrease the power demanded by consumers if needed. The adaptive energy management solution is modeled as an Integer Linear Programming (ILP) problem, and its complexity has been identified as NP-hard. To verify the improvements in energy utilization, an optimal algorithmic solution based on a brute-force search has been implemented and evaluated. Because of the hardness of the adaptive energy management problem and the non-polynomial growth of its optimal solution, which limits it to a small number of energy demands (e.g., 10 energy demands) and small values of the management mechanisms, several faster suboptimal algorithmic strategies have been proposed and implemented. In this context, at the first stage, we implemented three heuristic strategies: a greedy strategy (GreedyTs), a genetic-algorithm-based solution (GATs), and a dynamic programming approach (DPTs). Then, we incorporated into both the optimal and heuristic strategies a prepartitioning method in which the total set of analyzed services is divided into subsets of smaller size and complexity that are solved iteratively. As a result of the adaptive energy management work in this thesis, we present eight strategies, one optimal and seven heuristic, that, when deployed in communications infrastructures such as the NFV domain, seek the best possible scheduling of demands, leading to efficient energy utilization. The performance of the algorithmic strategies has been validated through extensive simulations in several scenarios, demonstrating improvements in energy consumption and the processing of energy demands. Additionally, the simulation results revealed that the heuristic approaches produce high-quality solutions close to the optimal while executing between two and seven orders of magnitude faster and with applicability to scenarios with thousands and even hundreds of thousands of energy demands.
    This thesis also explores possible application scenarios of both the proposed architecture for adaptive energy management and the algorithmic strategies. In this regard, we present some examples, including adaptive energy management for in-home systems and 5G network slicing, energy-aware management solutions for unmanned aerial vehicles (also known as drones), and applicability to the efficient allocation of spectrum in flex-grid optical networks. Finally, this thesis presents open research problems and discusses other application scenarios and future work.
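
    As a rough sketch of what a greedy, time-shifting heuristic in the spirit of GreedyTs might look like (the data model, ordering, and numbers below are my assumptions, not the thesis code), each flexible demand is shifted to the slot in its allowed window with the most remaining renewable supply, and any shortfall is covered by non-renewable energy.

```python
# Hedged sketch of a greedy, time-shifting demand scheduler. The data model,
# the largest-demand-first ordering, and all figures are illustrative assumptions.

def greedy_shift(demands, renewable):
    """demands: list of (energy_kwh, earliest_slot, latest_slot); renewable: kWh per slot."""
    remaining = list(renewable)
    nonrenewable_used = 0.0
    plan = {}
    # serve large demands first so they can still find green slots
    for i, (energy, start, end) in sorted(enumerate(demands),
                                          key=lambda d: -d[1][0]):
        slot = max(range(start, end + 1), key=lambda s: remaining[s])
        green = min(energy, remaining[slot])
        remaining[slot] -= green
        nonrenewable_used += energy - green   # shortfall covered by non-renewables
        plan[i] = slot
    return plan, nonrenewable_used

renewable = [3.0, 5.0, 2.0, 0.0]                  # hypothetical supply per slot (kWh)
demands = [(4.0, 0, 2), (2.0, 1, 3), (1.0, 0, 0)]
print(greedy_shift(demands, renewable))
```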

    Cloud-computing strategies for sustainable ICT utilization : a decision-making framework for non-expert Smart Building managers

    Virtualization of processing power, storage, and networking applications via cloud computing allows Smart Buildings to operate heavy-demand computing resources off-premises. While this approach reduces in-house costs and energy use, recent case studies have highlighted complexities in the decision-making processes associated with implementing the concept of cloud computing. This complexity is due to the rapid evolution of these technologies without a standardized approach by the organizations offering cloud-computing provision as a commercial concern. This study defines the term Smart Building as an ICT environment where a degree of system integration is accomplished. Non-expert managers are highlighted as key users of the outcomes from this project, given the diverse nature of Smart Buildings' operational objectives. This research evaluates different ICT management methods to effectively support decisions made by non-expert clients to deploy different models of cloud-computing services in their Smart Buildings' ICT environments. The objective of this study is to reduce the need for costly third-party ICT consultancy providers, so non-experts can focus more on their Smart Buildings' core competencies rather than the complex, expensive, and energy-consuming processes of ICT management. The gap identified by this research represents a vulnerability for non-expert managers seeking to make effective decisions regarding cloud-computing cost estimation, deployment assessment, associated power consumption, and management flexibility in their Smart Buildings' ICT environments. The project analyses cloud-computing decision-making concepts with reference to different Smart Building ICT attributes. In particular, it focuses on a structured programme of data collection achieved through semi-structured interviews, cost simulations, and risk-analysis surveys. The main output is a theoretical management framework for non-expert decision-makers across variously operated Smart Buildings. Furthermore, a decision-support tool is designed to enable non-expert managers to identify the extent of virtualization potential by evaluating different implementation options. This is presented in relation to contract limitations, security challenges, system integration levels, sustainability, and long-term costs. These requirements are explored in contrast to cloud demand changes observed across specified periods. Dependencies were identified to vary greatly depending on numerous organizational aspects such as performance, size, and workload. The study argues that constructing long-term, sustainable, and cost-efficient strategies for any cloud deployment depends on the thorough identification of the services required off- and on-premises. It points out that most of today's heavy-burdened Smart Buildings are outsourcing these services to costly independent suppliers, which causes unnecessary management complexities, additional cost, and system incompatibility. The main conclusions argue that cloud-computing costs can differ depending on the Smart Building's attributes and ICT requirements, and although in most cases cloud services are more convenient and cost-effective at the early stages of the deployment and migration process, they can become costly in the future if not planned carefully using cost estimation service patterns. The results of the study can be exploited to enhance core competencies within Smart Buildings in order to maximize growth and attract new business opportunities.
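
    As a back-of-the-envelope illustration of the kind of comparison such a decision-support tool performs (the prices, growth rate, and break-even logic below are invented, not taken from the study), the sketch contrasts cumulative on-premises cost, dominated by an upfront capital outlay plus steady operating cost, with a usage-based cloud bill that grows with the workload.

```python
# Illustrative sketch only: cumulative on-premises vs cloud cost over a planning
# horizon. All prices, the usage level, and the growth figure are invented.

def cumulative_costs(months, onprem_capex, onprem_opex_per_month,
                     cloud_per_unit, units_per_month, growth=1.02):
    onprem, cloud, usage = [onprem_capex], [0.0], units_per_month
    for _ in range(months):
        onprem.append(onprem[-1] + onprem_opex_per_month)
        cloud.append(cloud[-1] + cloud_per_unit * usage)
        usage *= growth                      # workload grows, cloud bill follows
    return onprem, cloud

onprem, cloud = cumulative_costs(36, onprem_capex=50_000,
                                 onprem_opex_per_month=1_200,
                                 cloud_per_unit=0.12, units_per_month=20_000)
breakeven = next((m for m in range(len(cloud)) if cloud[m] > onprem[m]), None)
print(f"cloud becomes the more expensive option after month {breakeven}")
```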