
    Annual Benefit Analysis of Integrating the Seasonal Hydrogen Storage into the Renewable Power Grids

    There has been growing interest in integrating hydrogen storage into power grids with high renewable penetration levels. Economic benefit and grid reliability are both essential for hydrogen storage integration. In this paper, an annual scheduling model (ASM) for energy-hub (EH) coupled power grids is proposed to investigate the annual benefits of seasonal hydrogen storage (SHS). Each energy hub consists of hydrogen storage, electrolyzers, and fuel cells, and electrical and hydrogen energy can be exchanged at any bus that hosts an energy hub. The physical constraints of both the grid and the EHs are enforced in the ASM. The proposed ASM considers the intra-season daily operation of the EH-coupled grid: four typical daily profiles represent the grid conditions in the four seasons, which reduces the computational burden. In addition, both intra-season and cross-season hydrogen exchange and storage are modeled, so the utilization of hydrogen storage is optimized on a year-round level. Numerical simulations are conducted on the IEEE 24-bus system. The results indicate that seasonal hydrogen storage can effectively reduce the annual operating cost and renewable curtailment.
    Comment: 5 pages, 4 figures
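    To make the scheduling idea concrete, below is a minimal sketch in the spirit of the ASM, not the paper's model: a four-season linear program with one typical day per season, a single aggregated energy hub, and cyclic cross-season storage linking. All data, efficiencies, and limits are invented for illustration.

```python
# A minimal sketch, NOT the paper's ASM: a four-season LP with one typical
# day per season, a single aggregated energy hub, and cyclic cross-season
# hydrogen storage. All data, efficiencies, and limits are invented.
import numpy as np
from scipy.optimize import linprog

seasons = ["spring", "summer", "autumn", "winter"]
days  = np.array([91, 92, 91, 91])            # days each typical day represents
load  = np.array([900, 1100, 950, 1200.0])    # MWh per typical day (assumed)
renew = np.array([1000, 1300, 900, 700.0])    # renewable availability (assumed)
price = 50.0                                  # $/MWh of thermal generation
eta_e, eta_f = 0.7, 0.5                       # electrolyzer / fuel-cell efficiency
s_cap, e_cap, f_cap = 60000.0, 400.0, 400.0   # tank (MWh H2), converter limits

# Variables per season: [g thermal, e electrolyzer-in, f fuel-cell-out,
#                        c curtailment, S end-of-season H2 level]
n = 5 * 4
idx = lambda s, k: 5 * s + k

cost = np.zeros(n)
A_eq, b_eq = [], []
for s in range(4):
    cost[idx(s, 0)] = days[s] * price
    # Daily power balance: g + renew - c + f - e = load
    row = np.zeros(n)
    row[idx(s, 0)] = row[idx(s, 2)] = 1.0
    row[idx(s, 1)] = row[idx(s, 3)] = -1.0
    A_eq.append(row); b_eq.append(load[s] - renew[s])
    # Cyclic storage linking: S_s - S_{s-1} = days * (eta_e*e - f/eta_f)
    row = np.zeros(n)
    row[idx(s, 4)] = 1.0
    row[idx((s - 1) % 4, 4)] = -1.0
    row[idx(s, 1)] = -days[s] * eta_e
    row[idx(s, 2)] = days[s] / eta_f
    A_eq.append(row); b_eq.append(0.0)

bounds = []
for s in range(4):
    bounds += [(0, None), (0, e_cap), (0, f_cap), (0, renew[s]), (0, s_cap)]

res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
for s in range(4):
    g, e, f, c, S = res.x[5 * s: 5 * s + 5]
    print(f"{seasons[s]:>6}: thermal={g:6.0f}  elec={e:4.0f}  fc={f:4.0f}  "
          f"curtail={c:4.0f}  H2={S:8.0f}")
print(f"annual thermal cost = ${res.fun:,.0f}")
```

    The cyclic constraint (end-of-winter storage feeds back into spring) is what makes the storage "seasonal": surplus renewable energy stored as hydrogen in high-output seasons displaces thermal generation months later.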

    Convolutional Neural Networks over Tree Structures for Programming Language Processing

    Programming language processing (similar to natural language processing) is a hot research topic in software engineering, and it has also aroused growing interest in the artificial intelligence community. Unlike a natural language sentence, however, a program contains rich, explicit, and complicated structural information, so traditional NLP models may be inappropriate for programs. In this paper, we propose a novel tree-based convolutional neural network (TBCNN) for programming language processing, in which a convolution kernel is designed over programs' abstract syntax trees to capture structural information. TBCNN is a generic architecture for programming language processing; our experiments show its effectiveness on two program analysis tasks: classifying programs by functionality and detecting code snippets of certain patterns. TBCNN outperforms baseline methods, including several neural models for NLP.
    Comment: Accepted at AAAI-16
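    As a rough illustration of tree-based convolution, the sketch below slides a depth-2 window (node plus children) over an AST, mixes position-dependent weight matrices per child, and max-pools the filter responses over all windows. The mixing rule and shapes are simplified assumptions, not the paper's exact "continuous binary tree" weighting.

```python
# A rough numpy sketch of tree-based convolution, not the authors' code:
# a depth-2 window (node + children) is convolved with position-mixed
# weight matrices, and filter responses are max-pooled over all windows.
import numpy as np

rng = np.random.default_rng(0)
D, F = 8, 16                          # node-embedding size, number of filters

W_t, W_l, W_r = (rng.normal(0, 0.1, (D, F)) for _ in range(3))  # top/left/right
b = np.zeros(F)

def conv_window(parent_vec, child_vecs):
    """Convolve one window: the parent uses W_t, children mix W_l and W_r."""
    z = parent_vec @ W_t
    n = len(child_vecs)
    for i, c in enumerate(child_vecs):
        if n == 1:
            eta_l = eta_r = 0.5       # lone child: split evenly (assumption)
        else:
            eta_l = (n - 1 - i) / (n - 1)   # how far left the child sits
            eta_r = i / (n - 1)             # how far right
        z += c @ (eta_l * W_l + eta_r * W_r)
    return np.tanh(z + b)

def tbcnn_features(tree, emb):
    """tree: {node: [children]}; emb: {node: D-vector}. Returns pooled features."""
    feats = [conv_window(emb[p], [emb[c] for c in kids]) for p, kids in tree.items()]
    return np.max(feats, axis=0)      # dynamic max-pooling over all windows

# Toy AST with four nodes; leaves get windows containing only themselves.
tree = {0: [1, 2], 1: [3], 2: [], 3: []}
emb = {i: rng.normal(size=D) for i in tree}
print(tbcnn_features(tree, emb).shape)   # (16,)
```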

    Transmission Planning for Climate-impacted Renewable Energy Grid: Data Preparation, Model Improvement, and Evaluation

    As renewable energy becomes the major resource in future grids, weather and climate have a growing impact on grid reliability. Transmission expansion planning (TEP) can reinforce a transmission network so that it suits climate-impacted grids. In this paper, we propose a systematic TEP procedure for climate-impacted, renewable-energy-enriched grids. In particular, we develop an improved TEP model that considers climate impact (TEP-CI) and evaluate system reliability under the resulting transmission investment plan. First, we create climate-impacted spatio-temporal future grid data to facilitate the TEP-CI study, including future climate-dependent renewable production and dynamic rating profiles of the Texas 123-bus backbone transmission system (TX-123BT). Second, we propose the TEP-CI model, which considers the variation in renewable production and dynamic line ratings, and obtain the investment plan for the future TX-123BT. Third, we present a security-constrained unit commitment (SCUC) customized for climate-impacted grids. Future grid reliability under various investment scenarios is analyzed based on the daily operating conditions from the SCUC simulations. The procedure presented in this paper enables numerical studies on grid planning that consider climate impacts, and it can serve as a benchmark for other TEP-CI research and performance evaluation.
    Comment: 9 pages, 8 figures
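    To illustrate the planning trade-off at the core of TEP, here is a toy sketch, not the paper's TEP-CI formulation: enumerate candidate line additions on an invented 3-bus network and pick the plan minimizing investment plus load-shedding cost, with seasonal capacity derates standing in for dynamic line ratings.

```python
# A toy sketch of the TEP trade-off, not the paper's TEP-CI model: enumerate
# candidate line additions on an invented 3-bus network and pick the plan
# minimizing investment plus load-shedding cost. Seasonal derates stand in
# for dynamic line ratings; all numbers are assumptions.
from itertools import combinations

existing   = {("A", "B"): 80.0, ("B", "C"): 60.0}          # corridor -> MW
candidates = [("A", "B", 60.0, 4e6),                        # (from, to, MW, $)
              ("A", "C", 90.0, 7e6),
              ("B", "C", 60.0, 5e6)]
scenarios  = [(2190, 150.0, 120.0, 1.00),   # (hours, gen at A, demand at C, derate)
              (2190, 180.0, 150.0, 0.90)]   # hot season: 10% thermal derate
VOLL = 5000.0                               # $/MWh value of lost load

def shed(caps, inj_a, dem_c, derate):
    """Greedy transport model: route A->B->C, then A->C; return unserved MW."""
    via_b  = min(inj_a,
                 caps.get(("A", "B"), 0.0) * derate,
                 caps.get(("B", "C"), 0.0) * derate)
    direct = min(inj_a - via_b, caps.get(("A", "C"), 0.0) * derate)
    return max(dem_c - via_b - direct, 0.0)

best = None
for k in range(len(candidates) + 1):
    for plan in combinations(candidates, k):
        caps, invest = dict(existing), 0.0
        for f, t, cap, cost in plan:
            caps[(f, t)] = caps.get((f, t), 0.0) + cap
            invest += cost
        op = sum(h * VOLL * shed(caps, a, d, r) for h, a, d, r in scenarios)
        if best is None or invest + op < best[0]:
            best = (invest + op, plan)

print(f"best plan: {[(f, t) for f, t, *_ in best[1]]}, total ${best[0]:,.0f}")
```

    A real TEP-CI model would replace the brute-force enumeration with a mixed-integer program and the greedy transport model with DC power flow, but the objective structure (investment versus climate-dependent operating cost) is the same.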

    Nursing Experience with Two Cases of Complete Duodenal Transection Combined with Right Hemicolon Injury

    Objective: To study the nursing of patients with complete duodenal transection combined with right-sided colon injury. Method: The clinical data of two such patients were retrospectively analyzed to summarize experience in psychological nursing, basic nursing, nutritional support, and drainage-tube care. Result: One patient recovered well after surgery without complications; the other developed a residual abdominal infection postoperatively and recovered after anti-inflammatory and symptomatic treatment. Conclusion: Complete duodenal transection with right-sided colon injury is among the most complicated and intractable abdominal traumas; a reasonable surgical approach, proper therapeutic measures, and intensive postoperative nursing are the keys to successful treatment.

    Distilling Word Embeddings: An Encoding Approach

    Distilling knowledge from a well-trained, cumbersome network into a small one has recently become a new research topic, as lightweight, high-performance neural networks are in particular demand in resource-restricted systems. This paper addresses the problem of distilling word embeddings for NLP tasks. We propose an encoding approach that distills task-specific knowledge from a set of high-dimensional embeddings, which reduces model complexity by a large margin while retaining high accuracy, offering a good compromise between efficiency and performance. Experiments on two tasks show that distilling knowledge from cumbersome embeddings outperforms directly training neural networks with small embeddings.
    Comment: Accepted by CIKM-16 as a short paper, and by the Representation Learning for Natural Language Processing (RL4NLP) Workshop @ACL-16 for presentation
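    A minimal sketch of the encoding idea, under assumptions about the setup: freeze the cumbersome high-dimensional embeddings, insert a linear encoding layer in front of a softmax classifier, train on the task, and keep the encoder's output as the distilled low-dimensional embeddings. The toy data and dimensions below are invented; this is not the authors' code.

```python
# Sketch of distillation by encoding: a linear layer E maps frozen big
# embeddings to small ones; E is trained through a task classifier, then
# the small embeddings are read off as big_emb @ E. All data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
V, D_big, D_small, C, N = 50, 300, 20, 3, 512   # vocab, dims, classes, samples

big_emb = rng.normal(0, 1, (V, D_big))           # "cumbersome" embeddings (frozen)
words   = rng.integers(0, V, N)                  # toy task: classify single words
labels  = words % C                              # synthetic labels for illustration

E = rng.normal(0, 0.01, (D_big, D_small))        # encoding layer (what we keep)
W = rng.normal(0, 0.01, (D_small, C))            # task classifier (discarded later)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for step in range(500):
    X = big_emb[words]                           # (N, D_big), embeddings frozen
    H = X @ E                                    # (N, D_small) distilled reps
    P = softmax(H @ W)                           # (N, C) class probabilities
    P[np.arange(N), labels] -= 1                 # dLoss/dlogits for cross-entropy
    P /= N
    gW = H.T @ P                                 # backprop through classifier...
    gE = X.T @ (P @ W.T)                         # ...and into the encoding layer
    W -= lr * gW
    E -= lr * gE

small_emb = big_emb @ E                          # distilled low-dim embeddings
print(small_emb.shape)                           # (50, 20)
```

    Because the encoder is trained through the task loss rather than by plain dimensionality reduction, the small embeddings keep task-specific structure, which is the compromise between efficiency and performance the abstract describes.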