8,272 research outputs found

    Cellular networks for smart grid communication

    The next-generation electric power system, known as the smart grid, relies on a robust and reliable underlying communication infrastructure to improve the efficiency of electricity distribution. Cellular networks, e.g., LTE/LTE-A systems, appear as a promising technology to facilitate the smart grid evolution. Their inherent performance characteristics and well-established ecosystem could potentially unlock unprecedented use cases, enabling real-time and autonomous distribution grid operations. However, cellular technology was not originally intended for smart grid communication, which requires highly reliable message exchange and massive device connectivity. The fundamental differences between smart grid and human-type communication challenge the classical design of cellular networks and introduce important research questions that have not been sufficiently addressed so far. Motivated by these challenges, this doctoral thesis investigates novel radio access network (RAN) design principles and performance analysis for the seamless integration of smart grid traffic in future cellular networks. Specifically, we focus on addressing the fundamental RAN problems of network scalability in massive smart grid deployments and radio resource management for smart grid and human-type traffic. The main objective of the thesis lies in the design, analysis, and performance evaluation of RAN mechanisms that would render cellular networks the key enabler for emerging smart grid applications.

    The first part of the thesis addresses the radio access limitations of LTE-based networks for reliable and scalable smart grid communication. We first identify the congestion problem in LTE random access that arises in large-scale smart grid deployments. To overcome this, a novel random access mechanism is proposed that can efficiently support real-time distribution automation services with negligible impact on the background traffic. Motivated by the stringent reliability requirements of various smart grid operations, we then develop an analytical model of the LTE random access procedure that allows us to assess the performance of event-based monitoring traffic under various load conditions and network configurations. We further extend our analysis to include the relation between the cell size and the availability of orthogonal random access resources, and we identify an additional challenge for reliable smart grid connectivity. To this end, we devise an interference- and load-aware cell planning mechanism that enhances reliability in substation automation services. Finally, we couple the problem of state estimation in wide-area monitoring systems with the reliability challenges in information acquisition. Using the developed analytical framework, we quantify the impact of imperfect communication reliability on state estimation accuracy and provide useful insights for the design of reliability-aware state estimators.

    The second part of the thesis builds on the first and focuses on the RAN problem of resource scheduling and sharing for smart grid and human-type traffic. We introduce a novel scheduler that achieves low latency for distribution automation traffic while allocating resources in a way that keeps the degradation of cellular users to a minimum. In addition, we investigate the benefits of the Device-to-Device (D2D) transmission mode for event-based message exchange in substation automation scenarios.
    We design a joint mode selection and resource allocation mechanism which results in higher data rates with respect to the conventional transmission mode via the base station. An orthogonal resource partition scheme between cellular and D2D links is further proposed to prevent the underutilization of the scarce cellular spectrum. The research findings of this thesis aim to deliver novel solutions to important RAN performance issues that arise when cellular networks support smart grid communication.
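
    The random access congestion problem identified in the first part of the thesis can be illustrated with a small Monte Carlo sketch (an illustrative toy in Python, not the analytical model developed in the thesis; the pool of 54 contention-based preambles is a typical LTE configuration assumed here): devices that report in the same random access opportunity each pick one preamble at random, and any preamble picked by more than one device collides.

        import random

        def rach_success_rate(num_devices, num_preambles=54, trials=10_000):
            """Fraction of devices whose randomly chosen preamble is not picked
            by any other device in the same random access opportunity."""
            successes = 0
            for _ in range(trials):
                choices = [random.randrange(num_preambles) for _ in range(num_devices)]
                counts = {}
                for p in choices:
                    counts[p] = counts.get(p, 0) + 1
                successes += sum(1 for p in choices if counts[p] == 1)
            return successes / (num_devices * trials)

        if __name__ == "__main__":
            # As the number of simultaneously reporting devices grows (e.g. after a
            # grid event), collisions rise sharply -- the congestion problem that the
            # proposed random access mechanism is designed to mitigate.
            for n in (10, 50, 100, 300):
                print(n, round(rach_success_rate(n), 3))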

    Gather-and-broadcast frequency control in power systems

    We propose a novel frequency control approach, in between centralized and distributed architectures, which is a continuous-time feedback control version of the dual decomposition optimization method. Specifically, a convex combination of the frequency measurements is centrally aggregated and passed through an integral control, and the resulting broadcast signal is then optimally allocated to local generation units. We show that our gather-and-broadcast control architecture comprises many previously proposed strategies as special cases. We prove local asymptotic stability of the closed-loop equilibria of the considered power system model, which is a nonlinear differential-algebraic system that includes traditional generators, frequency-responsive devices, as well as passive loads, where the sources are already equipped with primary droop control. Our feedback control is designed such that the closed-loop equilibria of the power system solve the optimal economic dispatch problem.
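
    A schematic way to write the control law described above (the symbols omega_i, c_i, u_i, J_i, and eta are chosen here for illustration and are not taken verbatim from the paper): let omega_i be the frequency deviation measured at bus i, c_i >= 0 with sum_i c_i = 1 the convex-combination weights, and u_i the controllable injection of unit i with strictly convex cost J_i. The gather-and-broadcast loop can then be sketched as

        \begin{aligned}
          \dot{\eta} &= -\sum_{i} c_i\,\omega_i            && \text{(gather: integral control of the aggregated frequency)}\\
          u_i        &= \big(\partial J_i\big)^{-1}(\eta)  && \text{(broadcast: each unit allocates according to its marginal cost)}
        \end{aligned}

    At a closed-loop equilibrium the frequencies synchronize and \dot{\eta} = 0 forces the common frequency deviation to zero (the weights sum to one), while every unit operates at the same marginal cost \partial J_i(u_i) = \eta; together with network power balance, these are the optimality conditions of an economic dispatch problem, consistent with the closed-loop property stated in the abstract.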

    Cross-layer design of multi-hop wireless networks

    Multi-hop wireless networks are usually defined as a collection of nodes equipped with radio transmitters, which not only have the capability to communicate with each other in a multi-hop fashion, but also to route each other's data packets. The distributed nature of such networks makes them suitable for a variety of applications where there are no assumed reliable central entities, or controllers, and may significantly alleviate the scalability issues of conventional single-hop wireless networks. This Ph.D. dissertation mainly investigates two aspects of efficient multi-hop wireless network design, namely: (a) network protocols and (b) network management, both in cross-layer design paradigms to ensure the notion of service quality, such as quality of service (QoS) in wireless mesh networks (WMNs) for backhaul applications and quality of information (QoI) in wireless sensor networks (WSNs) for sensing tasks. Throughout the presentation of this Ph.D. dissertation, different network settings are used as illustrative examples; however, the proposed algorithms, methodologies, protocols, and models are not restricted to the considered networks, but rather have wide applicability. First, this dissertation proposes a cross-layer design framework integrating a distributed proportional-fair scheduler and a QoS routing algorithm, using WMNs as an illustrative example. The proposed approach shows significant performance gains compared with other network protocols. Second, this dissertation proposes a generic admission control methodology for any packet network, wired or wireless, by modeling the network as a black box and using a generic mathematical function and Taylor expansion to capture the admission impact. Third, this dissertation further enhances the previous designs by proposing a negotiation process to bridge the applications' service quality demands and the resource management, using WSNs as an illustrative example. This approach allows negotiation among different service classes and WSN resource allocations to reach the optimal operational status. Finally, the guarantees of service quality are extended to the environment of multiple, disconnected, mobile subnetworks, where the question of how to maintain communications using dynamically controlled, unmanned data ferries is investigated.
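
    The black-box admission control methodology mentioned above can be sketched as follows (an illustrative Python toy; the function names and the quadratic example network are placeholders, not the dissertation's actual formulation): treat the measured service quality as an unknown function Q(x) of the admitted load vector x, probe the black box with small load perturbations to estimate its gradient, and use a first-order Taylor expansion to predict the impact of admitting a new flow.

        import numpy as np

        def taylor_admission_decision(measure_qos, load, new_flow, qos_floor, eps=1e-3):
            """Predict the QoS after admitting `new_flow` via a first-order Taylor
            expansion of the black-box QoS function around the current load, and
            admit only if the prediction stays above `qos_floor`."""
            q0 = measure_qos(load)
            # Finite-difference gradient estimate of the black-box QoS function.
            grad = np.array([(measure_qos(load + eps * e) - q0) / eps
                             for e in np.eye(len(load))])
            q_predicted = q0 + grad @ new_flow  # Q(x + dx) ~ Q(x) + grad(Q)^T dx
            return q_predicted >= qos_floor, q_predicted

        if __name__ == "__main__":
            # Toy "network": per-class QoS degrades quadratically with offered load.
            capacity = np.array([10.0, 8.0])
            qos = lambda x: 1.0 - float(np.sum((x / capacity) ** 2))
            admit, q = taylor_admission_decision(qos, np.array([4.0, 3.0]),
                                                 new_flow=np.array([2.0, 0.0]),
                                                 qos_floor=0.5)
            print(admit, round(q, 3))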

    On battery recovery effect in wireless sensor nodes

    With the perennial demand for longer runtime of battery-powered Wireless Sensor Nodes (WSNs), several techniques have been proposed to extend battery runtime. One such class of techniques, exploiting the battery recovery effect phenomenon, claims that performing an intermittent discharge instead of a continuous discharge will increase the usable battery capacity. Several works in the areas of embedded systems and wireless sensor networks have assumed the existence of this recovery effect and proposed different power management techniques in the form of power supply architectures (multiple battery setup) and communication protocols (burst mode transmission) in order to exploit it. However, until now, a systematic experimental evaluation of the recovery effect has not been performed with real battery cells, using high-accuracy battery testers, to confirm the existence of this recovery phenomenon. In this paper, a systematic evaluation procedure is developed to verify the existence of this battery recovery effect. Using our evaluation procedure, we investigated Alkaline, Nickel-Metal Hydride (NiMH), and Lithium-Ion (Li-Ion) battery chemistries, which are commonly used as power supplies for WSN applications. Our experimental results do not show any evidence of the aforementioned recovery effect in these battery chemistries. In particular, our results show a significant deviation from the stochastic battery models used by many power management techniques. Therefore, the existing power management approaches that rely on this recovery effect do not hold in practice. Instead of a battery recovery effect, our experimental results show the existence of the rate capacity effect, which is the reduction of usable battery capacity with higher discharge power, to be the dominant electrochemical phenomenon that should be considered for maximizing the runtime of WSN applications. We outline power management techniques that minimize the rate capacity effect in order to obtain a higher energy output from the battery.
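
    The rate capacity effect highlighted above can be illustrated with Peukert's empirical law (a simplified textbook model used here only for illustration; the cell parameters below are hypothetical, and the paper's conclusions rest on measurements rather than on this formula): the charge a cell can actually deliver shrinks as the constant discharge current grows.

        def delivered_capacity_ah(rated_capacity_ah, rated_time_h, current_a, peukert_k):
            """Charge delivered at a constant discharge current according to
            Peukert's law: t = H * (C / (I * H))**k, delivered = I * t."""
            t = rated_time_h * (rated_capacity_ah / (current_a * rated_time_h)) ** peukert_k
            return current_a * t

        if __name__ == "__main__":
            # Hypothetical 2.5 Ah cell rated for a 20 h discharge, Peukert exponent 1.2.
            # Higher discharge currents yield noticeably less usable capacity, which is
            # why lowering the discharge power, rather than relying on recovery pauses,
            # is the effective lever for extending WSN runtime.
            for current in (0.125, 0.5, 1.0, 2.0):
                print(f"{current:5.3f} A -> "
                      f"{delivered_capacity_ah(2.5, 20, current, 1.2):.2f} Ah")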