113 research outputs found

    Optimization of energy-constrained resources in radial distribution networks with solar PV

    The research objective of the proposed dissertation is to make the best use of available distributed energy resources to meet dynamic market opportunities while accounting for the AC physics of unbalanced distribution networks and the uncertainty of distributed solar photovoltaics (PV). With ever-increasing levels of renewable generation, distribution system operations must shift from a mindset of static, unidirectional power flows to dynamic, unpredictable bidirectional flows. To manage this variability, distributed energy resources (DERs; e.g., solar PV inverters, inverter-based batteries, electric vehicles, water heaters, and A/Cs) need to be coordinated for reliable and resilient operation. This introduces the challenge of coordinating such resources at scale and within the confines of the existing distribution system. It also becomes important to develop efficient and accurate models of the distribution system to achieve desired operating objectives such as tracking a market reference, reducing operating cost, or regulating voltage. This work surveys and discusses the challenges of, and proposes solutions to, the modeling and optimization of realistic distribution systems with significant penetration of renewables and controllable DERs, including energy storage. To contain the increase in system complexity resulting from the large number of controllable DERs, the distribution system operator has to evolve from a passive, Volt-VAR-focused role into a more active manager of resources. To address this challenge, we propose two main approaches. The first is a utility-centric approach, in which the utility controls the dispatch of flexible resources by solving an optimization problem; this requires the utility to have all the network and resource data as well as control over customer devices. The second is an aggregator-centric approach, in which an aggregator is an entity that represents an aggregation of many diverse DERs, or a Virtual Battery (VB). In this approach, the aggregator dispatches DERs in real time, while the utility provides bounds and limits (calculated offline) under which the aggregator must operate. The benefits of such an approach lie in improved data privacy and real-time dispatch. We present simulation results validating the proposed methods on various standard IEEE and realistic distribution feeders.
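    As a rough illustration of the utility-centric dispatch idea described above, the sketch below sets up a single-period, single-bus linear dispatch of grid import, curtailable PV, and a battery. The numbers, variable names, and the linearized model itself are illustrative assumptions, not the dissertation's formulation, which accounts for unbalanced AC network physics and PV uncertainty.

```python
# Minimal single-period DER dispatch sketch (utility-centric view).
# Hypothetical numbers; a real model would include multi-period storage
# dynamics, unbalanced AC power flow, and voltage constraints.
from scipy.optimize import linprog

load = 120.0          # feeder load (kW)
pv_avail = 80.0       # available solar PV (kW)
batt_limit = 30.0     # battery can charge/discharge up to 30 kW
c_grid = 0.15         # $/kWh for grid import
c_curtail = 0.05      # penalty for curtailing PV ($/kWh)

# Decision variables: x = [p_grid, p_pv, p_batt]
# Objective: grid cost + curtailment penalty (constant term dropped).
c = [c_grid, -c_curtail, 0.0]

# Power balance: p_grid + p_pv + p_batt = load
A_eq = [[1.0, 1.0, 1.0]]
b_eq = [load]

bounds = [(0, None),                  # grid import
          (0, pv_avail),              # PV dispatch up to forecast
          (-batt_limit, batt_limit)]  # battery charge(-)/discharge(+)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
p_grid, p_pv, p_batt = res.x
print(f"grid={p_grid:.1f} kW, pv={p_pv:.1f} kW, batt={p_batt:.1f} kW")
```

    In the aggregator-centric variant, the battery and PV bounds above would be replaced by the offline limits the utility hands to the aggregator, which then solves a problem of this flavor in real time.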

    Providing QoS with Reduced Energy Consumption via Real-Time Voltage Scaling on Embedded Systems

    Low energy consumption has emerged as one of the most important design objectives for many modern embedded systems, particularly battery-operated PDAs. For some soft real-time applications, such as multimedia applications, occasional deadline misses can be tolerated. This thesis focuses on how to leverage this feature to save more energy while still meeting the user-required quality of service (QoS). We have proposed a new probabilistic design methodology, a set of energy reduction techniques for single- and multiple-processor systems based on dynamic voltage scaling (DVS), practical solutions to the voltage set-up problem for multiple-voltage DVS systems, and a new QoS metric. Most existing design space exploration techniques, which are based on an application's worst-case execution time, often lead to over-designed systems. We have proposed a probabilistic design methodology for soft real-time embedded systems that uses detailed execution time information to reduce system resources while delivering the user-required QoS probabilistically. One important phase in the probabilistic design methodology is offline/online resource management. As an example, we have proposed a set of energy reduction techniques that employ DVS to exploit the slack arising from the tolerance to deadline misses in single- and multiple-processor systems, while statistically meeting the user-required completion ratio. The International Technology Roadmap for Semiconductors (ITRS) predicts that multiple-voltage DVS systems will be the low-power systems of the future. To find the best way to employ DVS, we have formulated the voltage set-up problem and provided practical solutions that seek the most energy-efficient voltage setting for the design of multiple-voltage DVS systems. We have also presented a case study in designing an energy-efficient dual-voltage soft real-time system with an (m, k)-firm deadline guarantee. Although the completion ratio is widely used as a QoS metric, it can only be applied to applications with independent tasks. We have proposed a new QoS metric that differentiates firm and soft deadlines and also considers task dependency. Based on this new metric, we have developed a set of online scheduling algorithms that enhance the quality of presentation (QoP) significantly, particularly for overloaded systems.
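    To make the probabilistic design idea concrete, the sketch below picks the slowest voltage/frequency setting whose deadline-miss rate still satisfies a required completion ratio, using an empirical cycle-count distribution. The function name, the frequency/voltage pairs, and the implicit "slowest feasible setting uses least energy" argument are illustrative assumptions, not the thesis's exact formulation.

```python
# Sketch: choose the lowest CPU frequency whose statistical completion
# ratio still meets the user's requirement, given sampled cycle counts.
# Assumes dynamic energy grows with voltage/frequency, so the slowest
# feasible setting is the most energy efficient.

def min_energy_frequency(cycle_samples, deadline_s, freqs_hz, volts,
                         required_completion_ratio):
    """Return (f, V) of the slowest setting whose miss rate is acceptable."""
    best = None
    for f, v in sorted(zip(freqs_hz, volts)):   # slowest setting first
        met = sum(1 for c in cycle_samples if c / f <= deadline_s)
        ratio = met / len(cycle_samples)
        if ratio >= required_completion_ratio:
            best = (f, v)
            break                               # slowest feasible => least energy
    return best

# Illustrative use: three voltage/frequency pairs, 80% completion required.
samples = [4e6, 5e6, 6e6, 7e6, 9e6] * 20        # cycles per frame (hypothetical)
setting = min_energy_frequency(samples, deadline_s=0.02,
                               freqs_hz=[200e6, 400e6, 600e6],
                               volts=[0.9, 1.1, 1.3],
                               required_completion_ratio=0.8)
print(setting)   # picks 400 MHz: 80% of frames finish in time, saving energy
```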

    Energy-Aware Scheduling for Streaming Applications

    Streaming applications have become increasingly important and widespread, with application domains ranging from embedded devices to server systems. Traditionally, researchers have been focusing on improving the performance of streaming applications to achieve high throughput and low response time. However, increasingly more attention is being shifted to the power/performance trade-off because power consumption has become a limiting factor on system design as integrated circuits enter the realm of nanometer technology. This work addresses the problem of scheduling a streaming application (represented by a task graph) with the goal of minimizing its energy consumption while satisfying its two quality of service (QoS) requirements, namely, throughput and response time. The available power management mechanisms are dynamic voltage scaling (DVS), which has been shown to be effective in reducing dynamic power consumption, and vary-on/vary-off, which turns processors on and off to save static power consumption. Scheduling algorithms are proposed for different computing platforms (uniprocessor and multiprocessor systems), different characteristics of workload (deterministic and stochastic workload), and different types of task graphs (singleton and general task graphs). Both continuous and discrete processor power models are considered. The highlights are a unified approach for obtaining optimal (or provably close to optimal) uniprocessor DVS schemes for various DVS strategies, and a novel multiprocessor scheduling algorithm that exploits the difference between the two QoS requirements to perform processor allocation, task mapping, and task speed scheduling simultaneously.
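    As a toy version of the uniprocessor, continuous-speed case, the sketch below computes the slowest constant speed that satisfies both QoS requirements for a chain of streaming tasks; since dynamic power is convex in speed, the slowest feasible constant speed minimizes energy. The cycle counts, bounds, and the simplified response-time model (one item processed back to back, ignoring pipelining and queueing) are illustrative assumptions, not the dissertation's algorithms.

```python
# Sketch: minimum constant speed for a uniprocessor running a chain of
# streaming tasks under a throughput (period) and a response-time bound.

def min_feasible_speed(cycles_per_task, period_s, response_bound_s,
                       f_max_hz=1e9):
    total_cycles = sum(cycles_per_task)              # work per stream item
    # Throughput: all tasks of one item must fit within one period.
    f_throughput = total_cycles / period_s
    # Response time: one item must finish within the response bound.
    f_response = total_cycles / response_bound_s
    f = max(f_throughput, f_response)
    # Return the normalized speed, or None if even full speed is infeasible.
    return f / f_max_hz if f <= f_max_hz else None

speed = min_feasible_speed(cycles_per_task=[2e6, 3e6, 1e6],
                           period_s=0.01, response_bound_s=0.008)
print(speed)   # 0.75: run at 75% of full speed, driven by the response bound
```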

    System Development and VLSI Implementation of High Throughput and Hardware Efficient Polar Code Decoder

    Polar codes are the first channel codes provably able to achieve the Shannon capacity, and they also exhibit very low error floors. These merits make them a potential candidate for future wireless communication or storage standards, and they have received increasing research interest in recent years. However, hardware decoder implementations still fall short of the expectations of practical applications, in terms of both throughput and hardware efficiency. This dissertation presents several system development approaches and hardware structures for three widely known decoding algorithms: successive cancellation (SC), list successive cancellation (LSC), and belief propagation (BP). All of these efforts aim to maximize throughput while minimizing hardware cost. A throughput-centric successive cancellation (TCSC) decoder is proposed for SC decoding. By introducing the concept of constituent codes, the decoding latency is significantly reduced with negligible decoding performance loss. However, the specially designed computation units dramatically increase the hardware cost, and handling both the conventional polar code sets and the constituent code sets complicates the hardware implementation. By exploiting the natural properties of the conventional SC decoder, datapaths for decoding constituent codes are built compatibly via a computation-unit sharing technique. This approach incurs no additional hardware cost except some multiplexer logic, yet significantly increases the decoding throughput. Other techniques, such as pre-computation and gate-level optimization, are also used to further increase the decoding throughput. A specially designed partial sum generator (PSG) is also investigated in this dissertation; it is hardware efficient and timing compatible with the proposed TCSC decoder. Additionally, a polar code construction scheme with constituent code optimization is presented, which aims to reduce the constituent-code-based SC decoding latency. Results show that, compared with the state-of-the-art decoder, TCSC achieves at least 60% latency reduction for codes of length n = 1024. Using the Nangate FreePDK 45nm process, the TCSC decoder reaches throughputs of up to 5.81 Gbps and 2.01 Gbps for the (1024, 870) and (1024, 512) polar codes, respectively. Moreover, with the proposed construction scheme, the TCSC decoder is generally able to achieve a further latency reduction of at least around 20% with negligible coding gain loss. Overlapped list successive cancellation (OLSC) is proposed as a design approach for LSC decoding. LSC decoding performs better than SC decoding at the cost of higher hardware consumption. With this approach, the l (l > 1) instances of the SC decoder required by LSC decoding with list size l can be cut down to only one, which dramatically reduces hardware complexity without any decoding performance loss. Approaches to reduce the latency associated with the pipeline scheme are also investigated. Simulation results show that, with the proposed design approach, hardware efficiency is increased significantly over recently proposed LSC decoders. Express journey belief propagation (XJBP) is proposed for BP decoding. This idea originates from extending the constituent-code concept from SC to BP decoding. An express journey refers to the datapath of specific constituent codes in the factor graph, which accelerates the propagation of belief information. The XJBP decoder achieves a 40.6% computational complexity reduction compared with conventional BP decoding, enabling an energy-efficient hardware implementation. In summary, all the efforts to optimize the polar code decoder are presented in this dissertation, supported by careful analysis, precise description, extensive numerical simulations, thoughtful discussion, and RTL implementation on VLSI design platforms.
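    For context on the recursion that TCSC and the constituent-code optimizations accelerate, the sketch below implements plain successive cancellation decoding with the min-sum f function and the g function over partial sums. It is a software illustration only, not the dissertation's hardware architecture, and the frozen-bit pattern and channel LLRs in the usage example are illustrative assumptions.

```python
# Minimal recursive SC decoder sketch (software only, natural bit order).
# LLR convention: positive LLR favors bit 0. Frozen bits are fixed to 0.
import math

def sc_decode(llr, frozen):
    """Return (decoded u bits, re-encoded x bits used as partial sums)."""
    n = len(llr)
    if n == 1:
        u = 0 if frozen[0] else (0 if llr[0] >= 0 else 1)
        return [u], [u]
    half = n // 2
    a, b = llr[:half], llr[half:]
    # f (min-sum): combine the two halves to decode the left sub-block.
    f = [math.copysign(1.0, x) * math.copysign(1.0, y) * min(abs(x), abs(y))
         for x, y in zip(a, b)]
    u_left, x_left = sc_decode(f, frozen[:half])
    # g: use the left partial sums to decode the right sub-block.
    g = [y + (1 - 2 * s) * x for x, y, s in zip(a, b, x_left)]
    u_right, x_right = sc_decode(g, frozen[half:])
    # Combine partial sums: (x_left XOR x_right, x_right).
    x = [xl ^ xr for xl, xr in zip(x_left, x_right)] + x_right
    return u_left + u_right, x

# Illustrative (8, 4) example: frozen positions and LLRs are assumptions.
frozen = [True, True, True, False, True, False, False, False]
llr = [1.8, 0.6, 2.3, 1.1, 0.9, 2.0, 1.4, 2.6]   # all positive => all-zero codeword
u_hat, _ = sc_decode(llr, frozen)
print(u_hat)   # expected: all zeros for this noiseless all-zero example
```

    Constituent-code decoders such as TCSC prune this recursion by recognizing sub-trees (e.g., all-frozen or rate-one blocks) that can be decoded in one shot, which is where the reported latency reductions come from.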
