
    Coordinated Optimal Voltage Control in Distribution Networks with Data-Driven Methods

    Voltage control is facing significant challenges with the increasing integration of photovoltaic (PV) systems and electric vehicles (EVs) in active distribution networks. This is driving major transformations of control schemes, which now require more sophisticated coordination between different voltage regulation devices across different timescales. In addition to conventional Volt/Var control (VVC) devices such as on-load tap changers (OLTCs) and capacitor banks (CBs), inverter-based PVs are encouraged to participate in voltage regulation given their flexible reactive power regulation capability. With vehicle-to-grid (V2G) technology and inverter-based interfaces at charging stations, the charging power of an EV can also be controlled to support voltages. These emerging technologies facilitate the development of two-stage coordinated optimal voltage control schemes. However, these new control schemes pursue fast response through local control strategies over short snapshots, which fail to track the optimal solutions for distribution system operation. Existing voltage control methods mainly aim to mitigate voltage violations and reduce network power loss, but they seldom focus on satisfying the various requirements of PV and EV customers. This may discourage customer-owned resources from participating in ancillary services such as voltage regulation. Moreover, model-based voltage control methods rely heavily on accurate knowledge of power system models and parameters, which is sometimes difficult to obtain in real-life distribution networks. The goal of this thesis is to propose a data-driven two-stage voltage control framework that fills the research gaps mentioned above, showing what frameworks, models and solution methods can be used in the optimal voltage control of modern active distribution systems to tackle the security and economic challenges posed by high integration of PVs and EVs.
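    The two-stage schemes described above typically pair a slower, network-wide optimization with fast local controllers at the PV inverters. As a point of reference for that local stage, the sketch below implements a generic Volt/Var droop curve in Python; the breakpoints, dead band and reactive-power limit are hypothetical placeholders, not values from the thesis.

```python
# Minimal sketch of a local Volt/Var droop curve for a PV inverter,
# of the kind used as the fast local stage in two-stage schemes.
# Breakpoints and limits below are hypothetical, not from the thesis.

def volt_var_setpoint(v_pu, q_max_kvar=50.0,
                      v_low=0.95, v_dead_lo=0.98,
                      v_dead_hi=1.02, v_high=1.05):
    """Return the reactive-power setpoint (kvar) for a measured voltage (p.u.).

    Positive values inject reactive power (boost voltage),
    negative values absorb it (reduce voltage).
    """
    if v_pu <= v_low:
        return q_max_kvar
    if v_pu < v_dead_lo:
        # Linear droop between full injection and the dead band.
        return q_max_kvar * (v_dead_lo - v_pu) / (v_dead_lo - v_low)
    if v_pu <= v_dead_hi:
        return 0.0          # inside the dead band: no reactive support
    if v_pu < v_high:
        return -q_max_kvar * (v_pu - v_dead_hi) / (v_high - v_dead_hi)
    return -q_max_kvar


if __name__ == "__main__":
    for v in (0.94, 0.97, 1.00, 1.03, 1.06):
        print(f"V = {v:.2f} p.u. -> Q = {volt_var_setpoint(v):+.1f} kvar")
```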

    Artificial Intelligence: from Research to Application; the Upper-Rhine Artificial Intelligence Symposium (UR-AI 2019)

    The TriRhenaTech alliance universities and their partners presented their competences in the field of artificial intelligence and their cross-border cooperation with industry at the tri-national conference 'Artificial Intelligence: from Research to Application' on March 13th, 2019 in Offenburg. The TriRhenaTech alliance is a network of universities in the Upper Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, and Offenburg, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprising 14 'grandes écoles' in the fields of engineering, architecture and management) and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.

    Achieving High Renewable Energy Integration in Smart Grids with Machine Learning

    The integration of high levels of renewable energy into smart grids is crucial for achieving a sustainable and efficient energy infrastructure. However, this integration presents significant technical and operational challenges due to the intermittent nature and inherent uncertainty of renewable energy sources (RES). The energy storage system (ESS) has therefore always been closely bound to renewable energy, and its charge and discharge control has become an important part of the integration. The addition of RES and ESS brings complex control, communication, and monitoring capabilities, which also makes the grid more vulnerable to attacks and introduces new cybersecurity challenges. A large number of works have been devoted to the optimal integration of RES and ESS into the traditional grid, along with combining ESS scheduling control with traditional Optimal Power Flow (OPF) control. Cybersecurity problems in RES-integrated grids have also gradually attracted researchers' interest. In recent years, machine learning techniques have emerged in different research fields, including the optimization of renewable energy integration in smart grids. Reinforcement learning (RL), which trains an agent to interact with its environment by making sequential decisions that maximize the expected future reward, is used here as an optimization tool. This dissertation explores the application of RL algorithms and models to achieve high renewable energy integration in smart grids. The research questions focus on the effectiveness and benefits of renewable energy integration for individual consumers and electricity utilities, and on applying machine learning techniques to optimize the behavior of the ESS, the generators, and other components in the grid. The objectives of this research are to investigate current algorithms for renewable energy integration in smart grids, explore RL algorithms, develop novel RL-based models and algorithms for optimization control and cybersecurity, evaluate their performance through simulations on real-world data sets, and provide practical recommendations for implementation. The research approach includes a comprehensive literature review to understand the challenges and opportunities associated with renewable energy integration. Various optimization algorithms, such as linear programming (LP) and dynamic programming (DP), and various RL algorithms, such as Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient (DDPG), are applied to solve problems arising during renewable energy integration in smart grids. Simulation studies on real-world data, including different types of loads and solar and wind energy profiles, are used to evaluate the performance and effectiveness of the proposed machine learning techniques. The results provide insights into the capabilities and limitations of machine learning in solving optimization problems in the power system. Compared with traditional optimization tools, the RL approach has the advantage of real-time implementation, at the cost of training time and unguaranteed model performance. Recommendations and guidelines for the practical implementation of RL algorithms in power systems are provided in the appendix.
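    As a concrete illustration of the RL framing described above, the sketch below trains a tabular Q-learning agent to schedule battery charging and discharging against a hypothetical time-of-use price. The dissertation itself uses deep RL (DQN, DDPG) on real load and renewable profiles; the prices, capacity and reward here are invented for illustration only.

```python
import numpy as np

# Minimal tabular Q-learning sketch for battery charge/discharge scheduling
# under a hypothetical time-of-use price profile. This only illustrates the
# sequential-decision framing, not the dissertation's deep RL models.

rng = np.random.default_rng(0)
PRICES = np.array([0.10, 0.10, 0.12, 0.30, 0.35, 0.15])  # $/kWh per step
CAPACITY = 4          # battery state of charge in discrete 1 kWh steps
ACTIONS = (-1, 0, 1)  # discharge, idle, charge (1 kWh per step)

q_table = np.zeros((len(PRICES), CAPACITY + 1, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.2

for episode in range(5000):
    soc = 0
    for t, price in enumerate(PRICES):
        a_idx = (rng.integers(len(ACTIONS)) if rng.random() < eps
                 else int(q_table[t, soc].argmax()))
        action = ACTIONS[a_idx]
        next_soc = min(max(soc + action, 0), CAPACITY)
        delta = next_soc - soc                    # energy actually moved
        reward = -price * delta                   # pay to charge, earn to discharge
        future = 0.0 if t == len(PRICES) - 1 else q_table[t + 1, next_soc].max()
        q_table[t, soc, a_idx] += alpha * (reward + gamma * future
                                           - q_table[t, soc, a_idx])
        soc = next_soc

# Greedy policy after training: charge when cheap, discharge when expensive.
soc = 0
for t, price in enumerate(PRICES):
    action = ACTIONS[int(q_table[t, soc].argmax())]
    print(f"t={t} price={price:.2f} soc={soc} action={action:+d}")
    soc = min(max(soc + action, 0), CAPACITY)
```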

    Power Consumption Analysis, Measurement, Management, and Issues: A State-of-the-Art Review of Smartphone Battery and Energy Usage

    The advancement and popularity of smartphones have made them essential, all-purpose devices. However, the lack of comparable advancement in battery technology has held back their optimum potential. Therefore, considering its scarcity, optimal use and efficient management of energy are crucial in a smartphone. For that, a fair understanding of a smartphone's energy consumption factors is necessary for both users and device manufacturers, along with other stakeholders in the smartphone ecosystem. It is important to assess how much of the device's energy is consumed by which components and under what circumstances. This paper provides a generalized but detailed analysis of the power consumption causes (internal and external) of a smartphone and also offers suggestive measures to minimize the consumption for each factor. The main contribution of this paper is four comprehensive literature reviews on: 1) smartphone power consumption assessment and estimation (including power consumption analysis and modelling); 2) power consumption management for smartphones (including energy-saving methods and techniques); 3) the state of the art in research and commercial development of smartphone batteries (including alternative power sources); and 4) mitigating the hazardous issues of smartphone batteries (with a detailed explanation of the issues). The research works are further subcategorized based on different research and solution approaches. A good number of recent empirical research works are considered for this comprehensive review, and each of them is succinctly analysed and discussed.
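    One common modelling approach covered by such reviews is a utilization-based linear power model, in which total draw is the sum of per-component coefficients weighted by duty cycle. The sketch below illustrates the idea; the component names and coefficients are hypothetical placeholders, not measurements reported in the paper.

```python
# Minimal sketch of a utilization-based linear power model, the kind of
# component-level estimation surveyed in the paper. Coefficients below are
# hypothetical placeholders, not measured values from the review.

COMPONENT_POWER_MW = {      # hypothetical active-power coefficients (mW)
    "cpu": 900.0,
    "screen": 450.0,
    "wifi": 300.0,
    "gps": 350.0,
    "cellular": 600.0,
}
BASE_POWER_MW = 25.0        # idle/baseline draw

def estimate_energy_mwh(utilisation: dict, duration_h: float) -> float:
    """Estimate energy use (mWh) from per-component utilisation in [0, 1]."""
    power_mw = BASE_POWER_MW + sum(
        COMPONENT_POWER_MW[name] * duty for name, duty in utilisation.items())
    return power_mw * duration_h

# Example: one hour of 30% CPU, screen on half the time, Wi-Fi mostly idle.
print(estimate_energy_mwh({"cpu": 0.3, "screen": 0.5, "wifi": 0.1}, 1.0))
```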

    Semiconductor Memory Applications in Radiation Environment, Hardware Security and Machine Learning System

    Semiconductor memory is a key component of computing systems. Beyond conventional memory and data storage applications, in this dissertation both mainstream and eNVM memory technologies are explored for radiation environments, hardware security systems and machine learning applications. In a radiation environment, e.g. aerospace, memory devices are exposed to different energetic particles. The strike of these energetic particles can generate electron-hole pairs (directly or indirectly) as they pass through the semiconductor device, resulting in a photo-induced current that may change the memory state. First, the trend of radiation effects in mainstream memory technologies with technology node scaling is reviewed. Then, single event effects in oxide-based resistive random-access memory (RRAM), one of the eNVM technologies, are investigated from the circuit level to the system level. The Physical Unclonable Function (PUF) has been widely investigated as a promising hardware security primitive; it exploits the inherent randomness in a physical system (e.g. intrinsic semiconductor manufacturing variability). In this dissertation, two RRAM-based PUF implementations are proposed, for cryptographic key generation (weak PUF) and device authentication (strong PUF), respectively. The performance of the RRAM PUFs is evaluated with experiments and simulations. The impact of non-ideal circuit effects on the performance of the PUFs is also investigated, and optimization strategies are proposed to mitigate these effects. The security resistance against modeling and machine learning attacks is analyzed as well. Deep neural networks (DNNs) have shown remarkable improvements in various intelligent applications such as image classification, speech classification, and object localization and detection, and increasing effort has been devoted to developing hardware accelerators for them. In this dissertation, two types of compute-in-memory (CIM) based hardware accelerator designs using SRAM and eNVM technologies are proposed for two binary neural networks, i.e. the hybrid BNN (HBNN) and the XNOR-BNN, respectively, targeting hardware resource-limited platforms such as edge devices. These designs feature high throughput, scalability, low latency and high energy efficiency. Finally, we have successfully taped out and validated the proposed SRAM-technology designs in TSMC 65 nm. Overall, this dissertation paves the path for new applications of memory technologies towards secure and energy-efficient artificial intelligence systems.
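    To make the weak-PUF idea concrete, the sketch below simulates key-bit generation by comparing the resistance of paired RRAM cells and then checks uniqueness via inter-device Hamming distance. The resistance distribution, array size and evaluation are simplified assumptions for illustration; the dissertation's implementations, circuit non-idealities and attack analysis are not reproduced here.

```python
import numpy as np

# Minimal sketch of a weak-PUF key generation idea based on comparing the
# resistance of paired RRAM cells, with a simple uniqueness check via
# pairwise Hamming distance. Distributions and sizes are hypothetical.

rng = np.random.default_rng(1)
N_DEVICES, N_BITS = 8, 128

def generate_key(rng, n_bits):
    # Each key bit compares two cells whose low-resistance state varies
    # log-normally due to manufacturing/programming variability.
    r_a = rng.lognormal(mean=np.log(10e3), sigma=0.3, size=n_bits)
    r_b = rng.lognormal(mean=np.log(10e3), sigma=0.3, size=n_bits)
    return (r_a > r_b).astype(np.uint8)

keys = np.array([generate_key(rng, N_BITS) for _ in range(N_DEVICES)])

# Inter-device Hamming distance: ideally close to 50% for good uniqueness.
dists = [np.mean(keys[i] != keys[j])
         for i in range(N_DEVICES) for j in range(i + 1, N_DEVICES)]
print(f"mean inter-device HD: {np.mean(dists):.3f}")
```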

    Techniques of Energy-Efficient VLSI Chip Design for High-Performance Computing

    How to deliver quality computing within a limited power budget is the key factor in moving very large scale integration (VLSI) chip design forward. This work introduces various low power VLSI design techniques used for state-of-the-art computing. From the viewpoint of power supply, conventional in-chip voltage regulators based on analog blocks bring large power and area overheads to computational chips. Motivated by this, a digital-based switchable-pin method to dynamically regulate power at low circuit cost is proposed, so that computing can be executed with a stable voltage supply. For the multiplier, one of the most widely used and time-consuming arithmetic units, operation in the logarithmic domain shows advantageous performance compared to the binary domain in terms of computation latency, power and area. However, the introduced conversion error reduces the reliability of the subsequent computation (e.g. multiplication and division). In this work, a fast calibration method suppressing the conversion error and its VLSI implementation are proposed. The proposed logarithmic converter can be supplied by DC power to achieve fast conversion, or by clocked power to reduce the power dissipated during conversion. Going beyond traditional computation methods and widely used static logic, a neuron-like cell is also studied in this work. Using multiple-input floating-gate (MIFG) metal-oxide-semiconductor field-effect transistor (MOSFET) based logic, a 32-bit, 16-operation arithmetic logic unit (ALU) with zipped decoding and a feedback loop is designed. The proposed ALU reduces switching power and has a strong drive-in capability due to coupling capacitors, compared to a static-logic-based ALU. Besides, recent neural computations bring serious challenges to digital VLSI implementation due to heavy matrix multiplications and non-linear functions. An analog VLSI design that is compatible with an external digital environment is proposed for the long short-term memory (LSTM) network. The entire analog-based network computes much faster and has higher energy efficiency than its digital counterpart.
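    The conversion error mentioned for the logarithmic multiplier can be seen in the classic Mitchell approximation, where log2(1 + m) is replaced by the mantissa m. The sketch below shows the approximation and its error for a few operand pairs; it illustrates why a calibration stage is needed, but it is not the converter or calibration circuit proposed in this work.

```python
# Minimal sketch of Mitchell-style logarithmic multiplication, illustrating
# the conversion error that a calibration stage would target. This is the
# classic approximation, not the circuit described in the dissertation.

def mitchell_log2(x: int) -> float:
    """Approximate log2(x) as k + m, where k is the position of the leading
    one and the fractional part m stands in for log2(1 + m)."""
    k = x.bit_length() - 1
    m = x / (1 << k) - 1.0          # fractional part in [0, 1)
    return k + m

def mitchell_multiply(a: int, b: int) -> float:
    s = mitchell_log2(a) + mitchell_log2(b)
    k, m = int(s), s - int(s)
    return (1 << k) * (1.0 + m)     # approximate antilogarithm

for a, b in [(13, 27), (100, 200), (255, 255)]:
    approx, exact = mitchell_multiply(a, b), a * b
    err = (exact - approx) / exact * 100
    print(f"{a} x {b}: approx {approx:.0f}, exact {exact}, error {err:.1f}%")
```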

    SCALING UP TASK EXECUTION ON RESOURCE-CONSTRAINED SYSTEMS

    The ubiquity of machine learning tasks executing on embedded systems with constrained resources has made the efficient execution of neural networks on these systems, under CPU, memory, and energy constraints, increasingly important. Unlike high-end computing systems, where resources are abundant and reliable, resource-constrained systems have only limited computational capability, limited memory, and a limited energy supply. This dissertation focuses on how to take full advantage of the limited resources of these systems in order to improve task execution efficiency across different stages of the execution pipeline. While the existing literature primarily aims to solve the problem by shrinking the model size according to the resource constraints, this dissertation improves execution efficiency for a given set of tasks from the following two aspects. First, we propose SmartON, the first batteryless active event detection system that considers both the event arrival pattern and the harvested energy to determine when the system should wake up and what the duty cycle should be. Second, we propose Antler, which exploits the affinity between all pairs of tasks in a multitask inference system to construct a compact graph representation of the task set for a given overall size budget. To support these algorithmic proposals, we propose the following hardware solutions: a controllable capacitor array that can expand the system's energy storage on the fly, and a FRAM array that can accommodate multiple neural networks running on one system.
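    As a rough illustration of the pairwise-affinity idea behind Antler, the sketch below summarizes each task by a feature vector, scores task pairs by cosine similarity, and greedily pairs each task with its closest partner. The affinity metric, feature vectors and grouping rule here are hypothetical; Antler's actual graph construction under a size budget is defined in the dissertation.

```python
import numpy as np

# Illustrative sketch only: quantify pairwise task affinity as the similarity
# of per-task feature summaries, then greedily match high-affinity tasks so
# they could share a backbone. Numbers and rules are hypothetical.

rng = np.random.default_rng(0)
N_TASKS, FEAT_DIM = 5, 32
# Pretend each task is summarised by a mean feature vector from a probe model.
task_features = rng.normal(size=(N_TASKS, FEAT_DIM))

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

affinity = np.array([[cosine(task_features[i], task_features[j])
                      for j in range(N_TASKS)] for i in range(N_TASKS)])

# Greedily pair each task with its highest-affinity partner (excluding itself).
np.fill_diagonal(affinity, -np.inf)
for i in range(N_TASKS):
    j = int(affinity[i].argmax())
    print(f"task {i} shares most structure with task {j} "
          f"(affinity {affinity[i, j]:+.2f})")
```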

    Understanding Deregulated Retail Electricity Markets in the Future: A Perspective from Machine Learning and Optimization

    On top of Smart Grid technologies and new market mechanism design, the further deregulation of the retail electricity market at the distribution level will play an important role in promoting energy system transformation in a socioeconomic way. In today's retail electricity market, customers have very limited "energy choice," or freedom to choose different types of energy services. Although the installation of distributed energy resources (DERs) has become prevalent in many regions, most customers and prosumers who have local energy generation and possible surplus can still only trade with utility companies. They either purchase energy from or sell energy surplus back to the utilities directly, while suffering from some price gap. The key to providing more freedom in energy trading and open innovation in the retail electricity market is to develop new consumer-centric business models and, possibly, a localized energy trading platform. This dissertation pursues exactly these ideas, proposing a holistic localized electricity retail market that pushes the next-generation retail electricity market infrastructure towards a level playing field where all customers have an equal opportunity to participate actively and directly. This dissertation also studies and discusses the opportunities of many emerging technologies, such as reinforcement learning and deep reinforcement learning, for intelligent energy system operation. Some suggestions for improving the modeling framework and methodology are included as well.
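    One simple way to picture such a localized trading platform is a uniform-price double auction that matches prosumer sell offers against consumer bids. The sketch below clears a toy order book this way; the order format, clearing rule and prices are illustrative assumptions, not the market design proposed in the dissertation.

```python
# Minimal sketch of clearing a localized energy market with a uniform-price
# double auction. Orders and the clearing rule are illustrative only.

# (price $/kWh, quantity kWh) offers from prosumers (sell) and consumers (buy).
sell_offers = [(0.08, 3.0), (0.10, 2.0), (0.14, 4.0)]
buy_bids = [(0.16, 2.5), (0.12, 3.0), (0.09, 2.0)]

def clear_double_auction(sells, buys):
    sells = [list(o) for o in sorted(sells)]                # cheapest sellers first
    buys = [list(o) for o in sorted(buys, reverse=True)]    # highest bidders first
    traded, s_i, b_i, last_pair = 0.0, 0, 0, None
    while s_i < len(sells) and b_i < len(buys) and sells[s_i][0] <= buys[b_i][0]:
        qty = min(sells[s_i][1], buys[b_i][1])
        traded += qty
        last_pair = (sells[s_i][0], buys[b_i][0])
        sells[s_i][1] -= qty
        buys[b_i][1] -= qty
        if sells[s_i][1] == 0:
            s_i += 1
        if buys[b_i][1] == 0:
            b_i += 1
    # Uniform clearing price: midpoint of the last matched ask and bid.
    price = None if last_pair is None else sum(last_pair) / 2
    return traded, price

volume, price = clear_double_auction(sell_offers, buy_bids)
print(f"cleared {volume:.1f} kWh at {price:.3f} $/kWh")
```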

    Digital Twins for Lithium-Ion Battery Health Monitoring with Linked Clustering Model using VGG 16 for Enhanced Security Levels

    Digital Twins (DTs) have only been widely used since the early 2000s. The concept of a DT refers to creating a computerized replica of a physical item or physical process: there is the physical world, the cyber world, a bridge between them, and a portal from the cyber world back to the physical world. The goal of a DT is to create an accurate digital replica of an existing physical object by combining AI, IoT, deep learning, and data analytics. Using the virtual copy in real time, DTs attempt to describe the behavior of the physical object. The viability of battery-based DTs as a solution to the industry's growing problems of degradation evaluation, usage optimization, manufacturing irregularities, and possible second-life applications, among others, is of fundamental importance. Through the integration of real-time monitoring and DT elaboration, data can be collected and used to determine which sensors and signals in a battery are most useful for analyzing its performance. This research proposes a Linked Clustering Model using VGG 16 for lithium-ion battery health condition monitoring (LCM-VGG-Li-ion-BHM). This work explores the use of deep learning to extract battery information by selecting the most important features gathered from the sensors. Data from a digital twin analyzed using deep learning allowed us to anticipate both typical and abnormal conditions, as well as those that required closer attention. The proposed model, when compared with existing models, performs better in health condition monitoring.
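    One plausible reading of the proposed pipeline is to encode battery sensor windows as image-like tensors, extract deep features with a VGG16 backbone, and cluster the features to separate normal from abnormal conditions. The sketch below follows that reading using Keras and scikit-learn; the input encoding, ImageNet weights and cluster count are assumptions for illustration and do not reproduce the paper's linked clustering model.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.cluster import KMeans

# Rough sketch: battery sensor windows encoded as 2-D "images", features
# extracted with a VGG16 backbone, then clustered to flag abnormal states.
# Encoding, ImageNet weights and cluster count are assumptions, not the
# paper's exact linked-clustering model or training data.

# Placeholder data: 20 sensor windows reshaped/resized to 224x224x3.
windows = np.random.rand(20, 224, 224, 3).astype("float32") * 255.0

backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))
features = backbone.predict(preprocess_input(windows), verbose=0)   # (20, 512)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print("cluster assignments:", labels)
```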