11 research outputs found

    Effective Caching for the Secure Content Distribution in Information-Centric Networking

    Full text link
    The secure distribution of protected content requires consumer authentication and conventionally relies on end-to-end encryption. However, in information-centric networking (ICN), end-to-end encryption makes content caching ineffective, since encrypted content stored in a cache is useless to any consumer who does not know the encryption key. For effective caching of encrypted content in ICN, we propose a novel scheme called Secure Distribution of Protected Content (SDPC), which ensures that only authenticated consumers can access the content. SDPC is a lightweight authentication and key distribution protocol; it allows consumer nodes to verify the originality of the published content using symmetric-key encryption. The security of SDPC was proved with BAN logic and verified with the Scyther tool. Comment: 7 pages, 9 figures, 2018 IEEE 87th Vehicular Technology Conference (VTC Spring)
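    The core idea — a symmetric key shared only with authenticated consumers lets any of them verify cached content — can be sketched as follows. This is an illustrative sketch, not the actual SDPC protocol; the names (`publish`, `verify`, `SHARED_KEY`) and the use of HMAC for the originality check are assumptions for illustration.

```python
import hmac
import hashlib

# Hypothetical sketch: the publisher tags content with a key distributed only
# to authenticated consumers, so a copy served from an in-network cache can be
# checked by any of them without per-consumer encryption.
SHARED_KEY = b"key-distributed-to-authenticated-consumers"

def publish(content: bytes) -> tuple[bytes, bytes]:
    """Publisher computes a symmetric-key tag over the content."""
    tag = hmac.new(SHARED_KEY, content, hashlib.sha256).digest()
    return content, tag

def verify(content: bytes, tag: bytes) -> bool:
    """Any consumer holding the shared key can verify originality,
    even when the content arrived from a cache rather than the publisher."""
    expected = hmac.new(SHARED_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

content, tag = publish(b"protected content chunk")
print(verify(content, tag))            # True: original content
print(verify(b"tampered chunk", tag))  # False: modified in transit or cache
```

    Because verification depends only on the shared symmetric key, the cached copy stays useful to every authorized consumer — the property that end-to-end encryption destroys.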

    Towards an Integrated Full-Stack Green Software Development Methodology

    Get PDF
    Existing green/eco-responsible approaches for IT are frequently domain-specific and narrowly focused on a single topic. For example, some works focus on saving energy through better virtual machine management on cloud infrastructures, or on data management in wireless sensor networks in order to minimize data transfers and sensor wake-ups. Nevertheless, they consider only limited aspects of the whole software development process; indeed, very few studies propose a global approach. In this context, we envision a green development methodology that addresses energy-saving aspects from the design phase onward and at all system layers (software, hardware, user requirements, execution contexts, etc.), which can provide positive leverage as well as avoid side effects (a decision may be positive at one system layer but trigger a negative impact on other layers). We motivate this vision and describe key ideas on how to address these considerations in the development methodology.

    Green demand aware fog computing : a prediction-based dynamic resource provisioning approach

    Get PDF
    Fog computing could cause the next paradigm shift by extending cloud services to the edge of the network, bringing resources closer to the end user. With its close proximity to end users and its distributed nature, fog computing can significantly reduce latency. With the appearance of more and more latency-stringent applications, in the near future we will witness an unprecedented amount of demand for fog computing. Undoubtedly, this will increase the energy footprint of the network edge and access segments. To reduce energy consumption in fog computing without compromising performance, in this paper we propose the Green-Demand-Aware Fog Computing (GDAFC) solution. Our solution uses a prediction technique to identify the working fog nodes (nodes that serve requests as they arrive), standby fog nodes (nodes that take over when the computational capacity of the working fog nodes is no longer sufficient), and idle fog nodes in a fog computing infrastructure. Additionally, it assigns an appropriate sleep interval to the fog nodes, taking into account the delay requirements of the applications. Results obtained from the mathematical formulation show that our solution can save up to 65% energy without violating the applications' delay requirements. © 2022 by the authors. Licensee MDPI, Basel, Switzerland.
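    The working/standby/idle split and the delay-bounded sleep interval can be sketched roughly as below. This is not the paper's actual GDAFC algorithm: the greedy sizing rule, the single standby node, and all names (`provision`, `wakeup_ms`) are assumptions made for illustration only.

```python
# Hedged sketch: classify fog nodes from a demand prediction, then bound each
# idle node's sleep interval by the application delay budget.
def provision(node_capacities, predicted_demand, delay_budget_ms, wakeup_ms=20):
    working, standby, idle = [], [], []
    served = 0.0
    for node, cap in node_capacities.items():
        if served < predicted_demand:
            working.append(node)      # serves requests as they arrive
            served += cap
        elif not standby:
            standby.append(node)      # takes over if working nodes saturate
        else:
            idle.append(node)         # may be put to sleep
    # An idle node must wake in time to meet the delay budget, so its sleep
    # interval cannot exceed the budget minus the wake-up latency.
    sleep_ms = max(0, delay_budget_ms - wakeup_ms)
    return working, standby, idle, sleep_ms

caps = {"f1": 40.0, "f2": 40.0, "f3": 40.0, "f4": 40.0}
print(provision(caps, predicted_demand=60.0, delay_budget_ms=100))
# (['f1', 'f2'], ['f3'], ['f4'], 80)
```

    Energy is saved by letting the idle set sleep; performance is protected because the sleep interval is capped by the delay budget and the standby node absorbs prediction error.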

    Secure Distribution of Protected Content in Information-Centric Networking

    Full text link
    The benefits of ubiquitous caching in ICN are profound; such features make ICN promising for content distribution, but they also introduce a challenge: protecting content against unauthorized access. Protecting content against unauthorized access requires consumer authentication and conventionally involves end-to-end encryption. However, in information-centric networking (ICN), such end-to-end encryption makes content caching ineffective, since encrypted content stored in a cache is useless to any consumer who does not know the encryption key. For effective caching of encrypted content in ICN, we propose a Secure Distribution of Protected Content (SDPC) scheme, which ensures that only authenticated consumers can access the content. SDPC is lightweight and allows consumers to verify the originality of the published content using symmetric-key encryption. Moreover, the SDPC naming scheme provides protection against privacy leakage. The security of SDPC was proved with BAN logic and verified with the Scyther tool, and simulation results show that SDPC can reduce content download delay. Comment: 15 pages, 8 figures. This article is an enhanced version of a journal article published in IEEE Systems Journal, DOI: 10.1109/JSYST.2019.2931813. arXiv admin note: text overlap with arXiv:1808.0328

    Modelling the Power Cost of Application Software Running on Servers

    Get PDF
    One of the most important aspects of managing data centres is controlling the power consumption of applications running on servers. Developers, in particular, should evaluate each of their applications from a power-consumption point of view. One way to conduct such an evaluation is to create models that predict power usage while applications run on servers. For this purpose, this study creates a non-exclusive test bench that collects data on subsystem utilization using a performance-counter tool. Based on the selected subsystem performance counters, various models have been created to estimate the power consumption of applications running on servers. The author's models are built from the performance of four subsystems (i.e. the CPU, memory, disk and network interface), collected with the Collectd tool, and the actual power consumption of the machine, measured with a TED5000 power meter. These subsystems were chosen because they are the server components that consume the most power. In addition, as the experiments in this study demonstrate, using these subsystems as the model's input is the most efficient selection across different hardware platforms. The accuracy of the models is affected by the selection of model inputs. Creating a model requires several steps: (i) connect the power meter to the server and install all required packages, such as Collectd; (ii) run workloads on the selected subsystems; (iii) collect and simplify the data (subsystem counters and actual power) stored while the workloads ran; and (iv) fit a model to the data with a modelling technique.
    This work has eight dimensions: (i) collection of the performance counters and the actual power consumption of a system, and simplification of the collected data; (ii) introduction of a simple test bench for modelling and estimating the power consumption of an application; (iii) introduction of two modelling techniques, Neural Network and Linear Regression; (iv) design of two types of workloads; (v) use of three real servers with different configurations; (vi) use of four scenarios to validate the models; (vii) proof of the importance of subsystem selection; and (viii) automation of the test bench. With these models, power-meter devices are no longer necessary for measuring power consumption; instead, the models can predict it. In general, the Neural Network models have lower error than the Linear Regression models, and all models (Neural Network or Linear Regression) perform better with the long-duration workload design.
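    The Linear Regression modelling step can be illustrated minimally: fit power = a + b · utilization from (counter, measured-power) samples of the kind Collectd and a power meter would produce. A real model would use all four subsystems as inputs; a single CPU feature keeps the closed-form fit short. The data values below are synthetic, not from the study.

```python
# Ordinary least squares for a single feature: estimate server power draw
# from CPU utilization, replacing the power meter at prediction time.
from statistics import mean

samples = [(10, 70.0), (30, 110.0), (50, 150.0), (80, 210.0)]  # (% CPU, watts)

xs = [s[0] for s in samples]
ys = [s[1] for s in samples]
x_bar, y_bar = mean(xs), mean(ys)

slope = (sum((x - x_bar) * (y - y_bar) for x, y in samples)
         / sum((x - x_bar) ** 2 for x in xs))
intercept = y_bar - slope * x_bar

def predict_power(cpu_util):
    """Estimate power consumption without a meter attached."""
    return intercept + slope * cpu_util

print(slope, intercept)   # 2.0 50.0 for this exact-linear data
print(predict_power(60))  # 170.0
```

    The intercept captures the server's idle draw and the slope the marginal cost of utilization; with four subsystems the fit generalizes to multiple regression, and the Neural Network models replace this linear form with a learned nonlinear one.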

    Computational characterization for distributed allocation in a configuration with a natural user interface

    Get PDF
    Master's dissertation - Universidade Federal de Santa Catarina, Centro Tecnológico, Graduate Program in Computer Science, Florianópolis, 2015. Abstract: In a distributed heterogeneous system, such as a computing grid, the choice of which computer system should process a task is made by means of heuristics applied equally to all systems. Current methods for assessing computational load on heterogeneous grids do not take into account qualitative characteristics that affect performance. Apparently identical computer systems, with the same quantitative traits (such as the number of processing cores and the amount of memory), may deliver different performance. The proposed method consists of an information policy for load balancing. It aims to measure the load of a computer system by assessing its quantitative features, both immutable (such as the number of processing cores) and mutable (such as the percentage of free memory), as well as its qualitative features, inherent to the computer system's architecture. Comparing computational load between systems allows load balancing to be performed even in heterogeneous distributed systems, so that the computer system on which to execute a task can be chosen most efficiently. This research uses the CVFlow tool, a Natural User Interface intended for load balancing, to evaluate the proposed method. The experiment consists of scheduling a set of tasks and comparing the proposed method with the state of the art in the literature. The proposed method provides a set of improvements that distribute the load more evenly among computer systems, avoiding overloading any particular system, and delivers better performance on the execution of the task set.
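    The central idea — score nodes by quantitative features (immutable and mutable) combined with a qualitative architecture factor — can be sketched as below. The scoring formula, the weights, and every name here are illustrative assumptions, not the dissertation's actual policy.

```python
# Hedged sketch: a load score mixing immutable quantitative features (cores),
# mutable ones (free memory), and a qualitative architecture factor, so that
# two nodes with identical specs can still rank differently.
def load_score(cores, free_mem_pct, running_tasks, arch_factor):
    """Lower score = better candidate. arch_factor > 1 means the node's
    architecture performs better than its raw specs suggest."""
    effective_capacity = cores * arch_factor * (free_mem_pct / 100.0)
    return running_tasks / effective_capacity

nodes = {
    "node-a": load_score(cores=8, free_mem_pct=50, running_tasks=6, arch_factor=1.0),
    "node-b": load_score(cores=8, free_mem_pct=50, running_tasks=6, arch_factor=1.3),
}
best = min(nodes, key=nodes.get)
print(best)  # node-b: identical specs, better qualitative factor
```

    A purely quantitative heuristic would treat node-a and node-b as interchangeable; the qualitative factor is what lets the balancer prefer the architecture that actually performs better.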

    A survey of power-saving techniques on data centers and content delivery networks

    No full text
    How to reduce power consumption within individual data centers has attracted major research effort in the past decade, as energy bills contribute significantly to overall operating costs. In recent years, increasing research effort has also been devoted to the design of practical power-saving techniques in content delivery networks (CDNs), which involve thousands of globally distributed data centers with content server clusters. In this paper, we present a comprehensive survey of existing research aiming to save power in data centers and content delivery networks, which share a high degree of commonality in several aspects. We first highlight the necessity of saving power in these two types of networks, followed by the identification of four major power-saving strategies that have been widely exploited in the literature. Furthermore, we present a high-level overview of the literature by categorizing existing approaches with respect to their scopes and research directions. These schemes are then analyzed with respect to their strategies, advantages, and limitations. Finally, we summarize several key aspects considered crucial in effective power-saving schemes, and we highlight a number of envisaged open research directions in the relevant areas that are significant and require further elaboration.
