
    Energy Sharing Models for Renewable Energy Integration: Subtransmission Level, Distribution Level, and Community Level

Distributed energy resources (DERs) are being embedded rapidly and widely in the power grid, promoting the transformation of the centralized power industry toward a more deregulated mode. However, how to consume renewable energy safely and efficiently is becoming a major concern. In this regard, energy sharing at both grid scale and community scale has emerged as a new solution that encourages participants to bid actively instead of acting as price takers, with the potential to accelerate the integration of DERs and decrease energy costs. At the grid level, two risk-averse energy sharing models are developed to integrate renewable energy safely by considering network constraints and overbidding risk. A risk-averse two-stage stochastic game model is proposed for the regional energy sharing market (ESM). The sample average approximation (SAA) method is used to approximate the stochastic Cournot-Nash equilibrium. In addition, a data-driven joint chance-constrained game is developed for energy sharing in the local energy market (LEM). This model treats the maximum outputs of renewable energy aggregators (REAs) as random variables whose probability distributions are unknown, though the decision-maker has access to finite samples. Case studies show that the proposed game models can effectively increase the profit of reliable players and decrease overbidding risk. At the community level, a community server enables energy sharing among users based on a Bayesian game-based pricing mechanism. It can also control the community energy storage system (CESS) to smooth the load based on the grid's price signal. A communication-censored ADMM for sharing problems is developed to decrease the communication cost between the community and the grid. Moreover, a co-optimization model for the planning and operation of the shared CESS is developed.
By introducing price uncertainty and degradation cost, the proposed model can evaluate the performance of the CESS more accurately and tap additional economic potential. This thesis provides proofs of the Nash equilibrium of all game models and of the convergence of all market-clearing algorithms. The proposed models and methods show performance improvements over existing solutions. The work in this thesis indicates that energy sharing can be implemented at different levels of the power system and could benefit the participants while promoting the integration of DERs.
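The SAA step can be illustrated with a toy single-bidder version of the overbidding problem (a hypothetical simplification for illustration, not the thesis's game model): given finite samples of uncertain renewable output, choose the bid that maximizes the sample-average profit, where any shortfall against the bid is penalized. The function name `saa_optimal_bid` and the `price`/`penalty` parameters are illustrative assumptions.

```python
import numpy as np

def saa_optimal_bid(samples, price=50.0, penalty=80.0, grid=None):
    """Sample average approximation (SAA): pick the bid that maximizes
    average profit over a finite set of renewable-output scenarios.

    Per-scenario profit: revenue for the bid minus a penalty on the
    shortfall when actual output falls below the bid (overbidding risk).
    """
    samples = np.asarray(samples, dtype=float)
    if grid is None:
        grid = np.linspace(0.0, samples.max(), 101)
    # Shortfall of each candidate bid (rows) against each scenario (columns).
    shortfall = np.maximum(grid[:, None] - samples[None, :], 0.0)
    profit = price * grid[:, None] - penalty * shortfall
    # Expected profit estimated by the sample mean; return the best bid.
    return grid[np.argmax(profit.mean(axis=1))]
```

With `penalty > price`, the sample-average objective stops rewarding bids above the realized outputs, so the optimizer never overbids past the scenarios it has seen.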

    Image Reconstructions of Compressed Sensing MRI with Multichannel Data

    Magnetic resonance imaging (MRI) provides high spatial resolution, high-quality soft-tissue contrast, and multi-dimensional images. However, the speed of data acquisition limits potential applications. Compressed sensing (CS) theory, which allows data to be sampled at a sub-Nyquist rate, offers a way to accelerate MRI scan time. Since most MRI scanners are currently equipped with multi-channel receiver systems, integrating CS with multi-channel systems can further shorten the scan time and also provide better image quality. In this dissertation, we develop several techniques for integrating CS with parallel MRI. First, we propose a method that extends reweighted l1 minimization to CS-MRI with multi-channel data. The individual channel images are recovered by the reweighted l1 minimization algorithm, and the final image is then combined by the sum-of-squares method. Computer simulations show that the new method improves reconstruction quality at a slightly increased computation cost. Second, we propose a reconstruction approach using ubiquitously available multi-core CPUs to accelerate CS reconstructions of multi-channel data. CS reconstructions for phased-array systems using iterative l1 minimization are significantly time-consuming, with computation complexity that scales with the number of channels. The experimental results show that reconstruction efficiency benefits significantly from parallelizing the CS reconstructions and pipelining multi-channel data on multi-core processors. In our experiments, an additional speedup factor of 1.6 to 2.0 was achieved using the proposed method on a quad-core CPU. Finally, we present an efficient reconstruction method for high-dimensional CS MRI on a GPU platform to shorten the time of iterative computations. Data management as well as the iterative algorithm are designed to suit SIMD (single instruction, multiple data) parallelization.
For three-dimensional multi-channel data, all slices along the frequency-encoding direction and all channels are highly parallelized and processed simultaneously within the GPU. The GPU runtime is only 2.3 seconds for reconstructing a simulated 4-channel dataset with a volume size of 256×256×32; compared to 67 seconds on a CPU, the proposed method is about 28 times faster. The rapid reconstruction algorithms demonstrated in this work are expected to help bring high-dimensional, multichannel parallel CS MRI closer to clinical applications.
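The reweighted l1 idea and the sum-of-squares combination can be sketched on a toy denoising problem (a simplification under assumed parameters, not the dissertation's reconstruction pipeline): weights w_i = 1/(|x_i| + eps) make the soft-threshold lighter on large coefficients in later passes, and per-channel magnitudes are then merged by root-sum-of-squares. The names and the values of `lam` and `eps` are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reweighted_l1_denoise(y, lam=0.2, eps=0.1, iters=5):
    """Reweighted l1 denoising: alternate between updating weights
    w_i = 1 / (|x_i| + eps) and re-solving the weighted thresholding
    step, so large coefficients are penalized less on later passes."""
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)
        x = soft_threshold(y, lam * w)
    return x

def sum_of_squares(channel_images):
    """Combine per-channel images into one via root-sum-of-squares."""
    return np.sqrt(sum(np.abs(c) ** 2 for c in channel_images))
```

On a signal with one strong coefficient and small noise, the reweighting preserves the strong coefficient while the small entries are suppressed to exactly zero; the same thresholding logic applies per channel before the sum-of-squares combination.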

    Self-organizing Coordination of Multi-Agent Microgrid Networks

    This work introduces self-organizing techniques to reduce the complexity and burden of coordinating distributed energy resources (DERs) and microgrids, which are rapidly increasing in scale globally. Technical and financial evaluations completed for power customers and for utilities identify how disruptions are occurring in conventional energy business models. Analyses completed for Chicago, Seattle, and Phoenix demonstrate site-specific and generalizable findings. Results indicate that net metering had a significant effect on the optimal amount of solar photovoltaics (PV) for households to install and on how utilities could recover lost revenue through increased energy rates or monthly fees. System-wide ramp-rate requirements also increased as solar PV penetration increased. These issues are resolved using a generalizable, scalable transactive energy framework for microgrids that enables coordination and automation of DERs and microgrids to ensure cost-effective use of energy for all stakeholders. This technique is demonstrated on 3-node and 9-node networks of microgrids with various amounts of load, solar, and storage. Results show that enabling trading achieves cost savings for all individual nodes and for the network of up to 5.4%. Trading behaviors are expressed using an exponential valuation curve that quantifies the reputation of trading partners from historical interactions between nodes, capturing compatibility, familiarity, and acceptance of trades. The same 9-node network configuration is used with varying levels of connectivity, resulting in up to 71% cost savings for individual nodes and up to 13% cost savings for the network as a whole. The effect of a trading fee is also explored to understand how electricity utilities might gain revenue from electricity traded directly between customers.
If a utility imposes a trading fee to recoup lost revenue, trading becomes financially infeasible for the agents, but it could remain feasible if the fee only recoups the cost of distribution charges. These findings conclude with a brief discussion of physical deployment opportunities.
Dissertation/Thesis: Doctoral Dissertation, Systems Engineering, 201
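A minimal sketch of an exponential valuation curve for trading reputation follows. The functional form, the `decay` parameter, and the discount applied in `trade_price` are assumptions for illustration only; the thesis's actual curve also folds in compatibility and familiarity, which are omitted here.

```python
import math

def reputation(accepted, total, decay=0.5):
    """Hypothetical exponential valuation curve: reputation grows toward 1
    with the acceptance rate of past trades, saturating exponentially."""
    if total == 0:
        return 0.0  # no history yet, neutral reputation
    rate = accepted / total
    return 1.0 - math.exp(-rate / decay)

def trade_price(base_price, rep, fee=0.0):
    """Offer a discount to reputable partners; `fee` models a flat
    utility trading fee added on top of each peer-to-peer trade."""
    return base_price * (1.0 - 0.2 * rep) + fee
```

Because the curve saturates, early accepted trades move reputation quickly while long histories change it only marginally, which keeps established trading relationships stable.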

    Development of Distributed Energy Market (Alternative Format Thesis)


    Modeling the polygenic architecture of complex traits

    Genome research has grown rapidly in recent years. Advances in sequencing technology have produced a veritable flood of genome-wide data that allow us to study the genetic architecture of complex phenotypes in more detail than ever before. Even the most modern analysis methods, however, reach their limits when effect sizes vary too strongly between markers, when confounding factors complicate the analysis, or when dependencies between related phenotypes are ignored. The goal of this work is to develop several methods that can handle these challenges efficiently. Our first contribution is the LMM-Lasso, a hybrid model that combines the advantages of variable selection with linear mixed models. To this end, it decomposes the phenotypic variance into two components: the first consists of individual genetic effects; the second consists of effects that are either caused by confounding factors, or are genetic in nature but cannot be attributed to individual markers. The advantage of our model is that the selected coefficients are easier to interpret than those of established standard approaches, which it also surpasses in prediction accuracy. The second contribution is a critical evaluation of various lasso methods that use a-priori structural information about the genetic markers and the phenotypes under study. We assess the different approaches by their prediction accuracy on simulated data and on gene-expression data in yeast. Both experiments show that structural information helps only when its assumptions are justified; as soon as the assumptions are violated, using the structural information has the opposite effect.
To guard against this, our next contribution proposes learning the structure between the phenotypes from the data. In the third contribution, we present an efficient computational scheme for multi-task Gaussian processes that learns both the genetic relatedness between the phenotypes and the relatedness of the residuals. Our inference procedure has reduced runtime and memory requirements, enabling us to study the joint heritability of phenotypes on large datasets. The chapter is completed by two experimental studies, a genome-wide association study of Arabidopsis thaliana and a gene-expression analysis in yeast, which confirm that the new method yields better predictions. The benefits of jointly modelling variable selection and confounding factors, and of multi-task learning, are evident throughout our experiments. While our experiments focus primarily on applications in genomics, the methods we developed are general and can also be applied in other fields.
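The LMM-Lasso decomposition can be sketched as follows (a hypothetical simplification of the approach described above): whiten the data with the eigendecomposition of a genetic similarity matrix K so the confounding random effect becomes isotropic noise, then run a plain lasso by coordinate descent on the rotated problem. The function name and the `delta`/`lam` parameters are illustrative assumptions.

```python
import numpy as np

def lmm_lasso(X, y, K, delta=1.0, lam=0.1, iters=200):
    """Sketch of an LMM-Lasso: rotate by the eigenvectors of the
    similarity matrix K and rescale, so the random (confounding)
    component becomes white noise; then solve the sparse fixed-effect
    problem with lasso coordinate descent."""
    s, U = np.linalg.eigh(K)
    d = 1.0 / np.sqrt(s + delta)          # whitening scale per eigenvalue
    Xr = (U.T @ X) * d[:, None]           # rotated, whitened design
    yr = (U.T @ y) * d                    # rotated, whitened phenotype
    n, p = Xr.shape
    beta = np.zeros(p)
    col_sq = (Xr ** 2).sum(axis=0)
    for _ in range(iters):
        for j in range(p):
            # Partial residual excluding feature j, then soft-threshold.
            r = yr - Xr @ beta + Xr[:, j] * beta[j]
            rho = Xr[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam * n, 0.0) / col_sq[j]
    return beta
```

With K equal to the identity the whitening reduces to a rescaling and the method degenerates to an ordinary lasso, which makes the sketch easy to sanity-check on data with one truly causal marker.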

    Domain-Specific Modelling for Coordination Engineering

    Multi-core processors offer increased speed and efficiency on various devices, from desktop computers to smartphones. The challenge is not only how to gain the utmost performance, but also how to support portability, continuity with prevalent technologies, and the dissemination of existing principles of parallel software design. This thesis shows how model-driven software development can help in engineering parallel systems. Rather than simply offering yet another programming approach for concurrency, it proposes using an explicit coordination model as the first development artefact. Key topics include:
    - basic foundations of parallel software design, coordination models and languages, and model-driven software development;
    - how Coordination Engineering eases parallel software design by separating concerns and activities across roles;
    - how the Space-Coordinated Processes (SCOPE) coordination model combines coarse-grained choreography of parallel processes with fine-grained parallelism within these processes;
    - extensive experimental evaluation of SCOPE implementations and the application of Coordination Engineering.
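A toy tuple-space sketch in the spirit of space-based coordination models such as SCOPE (a hypothetical simplification; the actual SCOPE model is far richer): worker processes coordinate only by putting and taking tagged tuples in a shared space, never by calling each other directly, which separates the coarse-grained choreography from the fine-grained work inside each process.

```python
import queue
import threading

class Space:
    """Minimal tuple space: tagged, blocking put/take operations."""
    def __init__(self):
        self._queues = {}
        self._lock = threading.Lock()

    def _q(self, tag):
        with self._lock:
            return self._queues.setdefault(tag, queue.Queue())

    def put(self, tag, value):
        self._q(tag).put(value)

    def take(self, tag):
        return self._q(tag).get()  # blocks until a tuple is available

def worker(space):
    """Fine-grained work inside one coarse-grained choreographed step."""
    while True:
        task = space.take("task")
        if task is None:          # sentinel: choreography says stop
            break
        space.put("result", task * task)

def run_pipeline(values, n_workers=2):
    """Choreograph workers purely through the space: feed tasks, then
    one stop sentinel per worker, then collect exactly len(values) results."""
    space = Space()
    threads = [threading.Thread(target=worker, args=(space,))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for v in values:
        space.put("task", v)
    for _ in threads:
        space.put("task", None)
    results = [space.take("result") for _ in values]
    for t in threads:
        t.join()
    return sorted(results)
```

Because workers never reference each other, the number of workers can change without touching the coordination logic, which is the separation of concerns the coordination-model approach aims for.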