657 research outputs found

    Smart Grid Technologies in Europe: An Overview

    The old electricity network infrastructure has proven inadequate with respect to modern challenges such as alternative energy sources, electricity demand, and energy-saving policies. Moreover, Information and Communication Technologies (ICT) seem to have reached an adequate level of reliability and flexibility to support a new concept of electricity network: the smart grid. In this work, we analyse the state of the art of smart grids in their technical, management, security, and optimization aspects. We also provide a brief overview of the regulatory aspects involved in the development of a smart grid, mainly from the viewpoint of the European Union.

    Lessons learned in building a middleware for smart grids

    Smart grids play an important role in the modernization and optimization of the existing electrical grid, helping to meet the current European Union energy and climate targets. Smart grids require distributed applications to manage the grid more efficiently. The performance of these distributed applications affects the communication delay and the timeliness of interaction with the devices located in users' Home Area Networks. This paper presents the results of the ENCOURAGE project related to the development of a software platform to support smart grids. The work presented in this paper assesses four different middleware configurations and analyses the results of delay performance tests. The results show that the mean end-to-end delay is between 310 ms and 453 ms under proper conditions. In terms of operational costs, the optimal configuration enables managing houses at less than 0.25 euros per month per house. This paper demonstrates the maturity of the technology to support smart grids and the possibility of transferring the ENCOURAGE project results to industry.

    CoAP Infrastructure for IoT

    The Internet of Things (IoT) can be seen as a large-scale network of billions of smart devices. IoT devices often exchange data in small but numerous messages, which requires IoT services to be more scalable and reliable than ever. Traditional protocols known from the Web world do not fit well in the constrained environments these devices operate in. Therefore, many lightweight protocols specialized for the IoT have been studied, among which the Constrained Application Protocol (CoAP) stands out for its well-known REST paradigm and easy integration with the existing Web. Meanwhile, new paradigms such as Fog Computing have emerged, attempting to avoid the centralized bottleneck in IoT services by moving computation to the edge of the network. Since a Fog node essentially belongs to a relatively constrained environment, CoAP fits in well. Among the many attempts at building scalable and reliable systems, Erlang, a typical concurrency-oriented programming (COP) language, has been battle-tested in the telecom industry, which has requirements similar to those of the IoT. To explore the possibility of applying Erlang, and COP in general, to the IoT, this thesis presents an Erlang-based CoAP server/client prototype, ecoap, with a flexible concurrency model that can scale up to an unconstrained environment like the Cloud and down to a constrained environment like an embedded platform. The flexibility of the presented server renders the same architecture applicable from Fog to Cloud. To evaluate its performance, the proposed server is compared with the mainstream CoAP implementation on an Amazon Web Services (AWS) Cloud instance and on a Raspberry Pi 3, representing the unconstrained and constrained environments respectively. The ecoap server achieves comparable throughput, lower latency, and in general scales better than the other implementation both in the Cloud and on the Raspberry Pi. The thesis yields positive results and demonstrates the value of the Erlang philosophy in the IoT space.
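    For background on the protocol the abstract above discusses: every CoAP message begins with a fixed 4-byte header defined in RFC 7252. A minimal sketch in Python of packing that header (this illustrates the wire format only; it is not taken from the ecoap implementation):

    ```python
    import struct

    def coap_header(msg_type: int, code: int, message_id: int, tkl: int = 0) -> bytes:
        """Build the 4-byte fixed CoAP header (RFC 7252, Section 3).

        byte 0: version (2 bits) | type (2 bits) | token length (4 bits)
        byte 1: code (class.detail, e.g. 0.01 = GET = 0x01)
        bytes 2-3: message ID, big-endian
        """
        version = 1  # the only version defined by RFC 7252
        byte0 = (version << 6) | (msg_type << 4) | tkl
        return struct.pack("!BBH", byte0, code, message_id)

    # Confirmable (type 0) GET request (code 0.01) with message ID 0x1234
    header = coap_header(msg_type=0, code=0x01, message_id=0x1234)
    print(header.hex())  # 40011234
    ```

    Keeping the fixed header this small is precisely what makes CoAP suitable for constrained devices and Fog nodes.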

    Quality of Service Driven Runtime Resource Allocation in Reconfigurable HPC Architectures

    Heterogeneous System Architectures (HSA) are gaining importance in the High Performance Computing (HPC) domain due to increasing computational requirements coupled with energy consumption concerns, which conventional CPU architectures fail to address effectively. Systems based on Field Programmable Gate Arrays (FPGAs) have recently emerged as an effective alternative to Graphics Processing Units (GPUs) for demanding HPC applications, although they lack the abstractions available in conventional CPU-based systems. This work tackles the problem of runtime resource management in a system using FPGA-based co-processors to accelerate multi-programmed HPC workloads. We propose a novel resource manager able to dynamically vary the number of FPGAs allocated to each of the jobs running in a multi-accelerator system, with the goal of meeting a given Quality of Service metric for the running jobs, measured in terms of deadline or throughput. We implement the proposed resource manager in a commercial HPC system, evaluating its behavior with representative workloads.
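    To make the allocation idea concrete, a toy version of such a QoS-driven manager can be sketched as a greedy loop that hands each free FPGA to the job furthest from its throughput target. The job model and the linear-scaling assumption here are ours for illustration, not the paper's actual policy:

    ```python
    def allocate_fpgas(jobs, total_fpgas):
        """Greedy QoS-driven allocation sketch.

        jobs: dict mapping job name -> (target_throughput, throughput_per_fpga)
        Assumes throughput scales linearly with allocated FPGAs and
        repeatedly gives the next FPGA to the job with the largest
        remaining QoS gap, stopping once all targets are met.
        """
        alloc = {name: 0 for name in jobs}
        for _ in range(total_fpgas):
            # QoS gap = target throughput minus currently achieved throughput
            gap = {
                name: target - alloc[name] * per_fpga
                for name, (target, per_fpga) in jobs.items()
            }
            worst = max(gap, key=gap.get)
            if gap[worst] <= 0:
                break  # all QoS targets met; leave remaining FPGAs free
            alloc[worst] += 1
        return alloc

    jobs = {"jobA": (100.0, 40.0), "jobB": (60.0, 30.0)}
    print(allocate_fpgas(jobs, 4))  # {'jobA': 2, 'jobB': 2}
    ```

    A real runtime manager would re-run such a decision loop as jobs arrive, finish, or miss deadlines, rather than allocating once up front.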

    MP-SDWN: a novel multipath-supported software defined wireless network architecture


    Asynchronous Load Balancing and Auto-scaling: Mean-Field Limit and Optimal Design

    We introduce a Markovian framework for load balancing where classical algorithms such as Power-of-d are combined with asynchronous auto-scaling features. These allow the net service capacity to scale up or down in response to the current load within the same timescale as job dynamics. This is inspired by serverless frameworks such as Knative, used among others by Google Cloud Run, where servers are software functions that can be flexibly instantiated in milliseconds according to user-defined scaling rules. In this context, load balancing and auto-scaling are employed together to optimize both user-perceived delay performance and energy consumption. In the literature, these mechanisms are synchronous or rely on a central queue. The architectural novelty of our work is to consider an asynchronous and decentralized system, as in Knative, which takes scalability to the next level. Under a general assumption on the auto-scaling process, we prove a mean-field limit theorem that provides an accurate approximation of the system dynamics when the mean demand and nominal service capacity grow large in proportion. We characterize the fixed points of the mean-field limit model and provide a simple condition telling whether or not all the available servers need to be turned on to handle the incoming demand. Then, we investigate how to design optimal auto-scaling rules and find a general condition able to drive the mean-field dynamics to delay and relative energy optimality, a situation where the user-perceived delay and the relative energy wastage induced by idle servers vanish. The proposed optimality condition suggests scaling up capacity if and only if the mean demand exceeds the overall rate at which servers become idle and active. This yields the definition of tractable optimization frameworks to trade off between energy and performance, which we show as an application of our work.
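    The Power-of-d dispatching rule mentioned above is easy to state: sample d servers uniformly at random and send the job to the one with the shortest queue. A minimal simulation sketch of that rule alone (a plain JSQ(d) dispatcher without the paper's auto-scaling dynamics or service completions):

    ```python
    import random

    def power_of_d(queues, d, rng):
        """Sample d distinct servers at random and return the index
        of the one with the shortest queue (Power-of-d choices)."""
        candidates = rng.sample(range(len(queues)), d)
        return min(candidates, key=lambda i: queues[i])

    def simulate(n_servers=10, n_jobs=1000, d=2, seed=0):
        rng = random.Random(seed)
        queues = [0] * n_servers
        for _ in range(n_jobs):
            i = power_of_d(queues, d, rng)
            queues[i] += 1  # enqueue the job; service is ignored in this sketch
        return queues

    qs = simulate()
    print(sum(qs), max(qs) - min(qs))  # all jobs placed; queues stay well balanced
    ```

    Even with d = 2, sampling just two candidates keeps the queue lengths close together, which is the classical "power of two choices" effect the paper builds on.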

    Saber: window-based hybrid stream processing for heterogeneous architectures

    Modern servers have become heterogeneous, often combining multicore CPUs with many-core GPGPUs. Such heterogeneous architectures have the potential to improve the performance of data-intensive stream processing applications, but they are not supported by current relational stream processing engines. For an engine to exploit a heterogeneous architecture, it must execute streaming SQL queries with sufficient data-parallelism to fully utilise all available heterogeneous processors, and decide how to use each in the most effective way. It must do this while respecting the semantics of streaming SQL queries, in particular with regard to window handling. We describe SABER, a hybrid high-performance relational stream processing engine for CPUs and GPGPUs. SABER executes window-based streaming SQL queries in a data-parallel fashion using all available CPU and GPGPU cores. Instead of statically assigning query operators to heterogeneous processors, SABER employs a new adaptive heterogeneous lookahead scheduling strategy, which increases the share of queries executing on the processor that yields the highest performance. To hide data movement costs, SABER pipelines the transfer of stream data between different memory types and the CPU/GPGPU. Our experimental comparison against state-of-the-art engines shows that SABER increases processing throughput while maintaining low latency for a wide range of streaming SQL queries with small and large window sizes.
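    The window semantics such an engine must respect can be illustrated with the simplest case, a tumbling window: events are grouped into fixed, non-overlapping time buckets and each bucket is aggregated independently. A CPU-only sketch of that semantics (unrelated to SABER's hybrid scheduling):

    ```python
    from collections import defaultdict

    def tumbling_window_sum(events, window_size):
        """Aggregate (timestamp, value) events into fixed, non-overlapping
        windows of length window_size, summing values per window.

        Window k covers timestamps [k * window_size, (k + 1) * window_size).
        """
        buckets = defaultdict(int)
        for ts, value in events:
            buckets[ts // window_size] += value
        return dict(sorted(buckets.items()))

    events = [(0, 1), (3, 2), (5, 4), (9, 8), (10, 16)]
    print(tumbling_window_sum(events, 5))  # {0: 3, 1: 12, 2: 16}
    ```

    Sliding windows generalise this by letting windows overlap, which is what makes data-parallel execution across processors non-trivial: an event may then contribute to several windows at once.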