1,005 research outputs found

    Non-Intrusive Load Disaggregation of Industrial Cooling Demand with LSTM Neural Network

    As the telecommunication industry becomes increasingly energy intensive, energy efficiency actions are crucial and urgent measures for achieving energy savings. Cooling is the main contributor to the energy demand of buildings devoted to operating the telecommunication network. The main obstacle to assessing the impact of cooling equipment on energy consumption, and thus to giving energy managers awareness of a building's energy outlook, is the lack of monitoring devices providing disaggregated load measurements. This work proposes a Non-Intrusive Load Disaggregation (NILD) tool that combines a literature-based decomposition with an innovative LSTM Neural Network-based decomposition algorithm to assess cooling demand. The proposed methodology has been employed to analyze a real-case dataset containing aggregated load profiles from around sixty telecommunication buildings, resulting in accurate, compliant, and meaningful outcomes.
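
    As a concrete illustration of the sequence-modeling idea, the sketch below shows a minimal sequence-to-point LSTM disaggregator in PyTorch. The window length, layer sizes, and target definition are illustrative assumptions, not the architecture used in the paper.

        import torch
        import torch.nn as nn

        class CoolingDisaggregator(nn.Module):
            """Sequence-to-point NILM sketch: map a window of aggregate load
            to an estimate of the cooling component at its endpoint."""
            def __init__(self, hidden_size: int = 64):
                super().__init__()
                self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                                    num_layers=2, batch_first=True)
                self.head = nn.Linear(hidden_size, 1)

            def forward(self, x: torch.Tensor) -> torch.Tensor:
                # x: (batch, window, 1) normalized aggregate load samples
                out, _ = self.lstm(x)
                return self.head(out[:, -1, :])   # (batch, 1) cooling estimate

        model = CoolingDisaggregator()
        dummy = torch.randn(8, 96, 1)             # e.g. 96 quarter-hour samples
        cooling = model(dummy)

    A trained model of this shape would be applied window-by-window over each building's aggregate profile to produce the disaggregated cooling series.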

    Privacy-Protecting Energy Management Unit through Model-Distribution Predictive Control

    The roll-out of smart meters in electricity networks introduces risks for consumer privacy due to increased measurement frequency and granularity. Through various Non-Intrusive Load Monitoring techniques, consumer behavior may be inferred from metering data. In this paper, we propose an energy management method that reduces energy cost and protects privacy through the minimization of information leakage. The method is based on a Model Predictive Controller that utilizes energy storage and local generation, and that predicts the effects of its actions on the statistics of both the consumer's actual energy consumption and the consumption seen by the grid. Computationally, the method requires solving a Mixed-Integer Quadratic Program of manageable size whenever new meter readings are available. We simulate the controller on generated residential load profiles with different privacy costs in a two-tier time-of-use energy pricing environment. Results show that information leakage is effectively reduced at the expense of increased energy cost. The results also show that with the proposed controller the consumer load profile seen by the grid resembles a mixture between that obtained with Non-Intrusive Load Leveling and Lazy Stepping. Comment: Accepted for publication in IEEE Transactions on Smart Grid, 2017, special issue on Distributed Control and Efficient Optimization Methods for Smart Grids.
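
    A minimal sketch of the kind of optimization at the core of such a controller, using cvxpy. The paper minimizes an information-leakage measure; here a quadratic flatness penalty on the grid-visible profile stands in as a privacy surrogate, and all prices, limits, and horizon lengths are invented for illustration.

        import cvxpy as cp
        import numpy as np

        T = 24                                           # horizon (hours)
        price = np.where(np.arange(T) < 17, 0.10, 0.25)  # two-tier time-of-use tariff
        load = 0.5 + 0.4 * np.random.rand(T)             # forecast demand (kW)

        g = cp.Variable(T)                   # power drawn from the grid
        b = cp.Variable(T)                   # battery charge (+) / discharge (-)
        s = cp.Variable(T + 1)               # battery state of charge (kWh)
        u = cp.Variable(T, boolean=True)     # 1 = charging, 0 = discharging

        cons = [s[0] == 2.0, s[1:] == s[:-1] + b,
                s >= 0, s <= 5.0,
                b <= 2.0 * u,                # charge only while u = 1
                b >= -2.0 * (1 - u),         # discharge only while u = 0
                g == load + b, g >= 0]

        privacy = cp.sum_squares(g - cp.sum(g) / T)      # surrogate leakage term
        prob = cp.Problem(cp.Minimize(price @ g + 0.05 * privacy), cons)
        prob.solve()   # the binaries make this an MIQP; needs e.g. SCIP or GUROBI

    In a receding-horizon loop, only the first step of g and b would be applied before re-solving with fresh meter readings.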

    Disaggregating high-resolution gas metering data using pattern recognition

    Growing concern about the scale and extent of the gap between predicted and actual energy performance of new and retrofitted UK homes has led to a surge in the development of new tools and technologies trying to address the problem. A vital aspect of this work is to improve the ease and accuracy of measuring in-use performance, to better understand the extent of the gap and diagnose its causes. Existing approaches range from low-cost but basic assessments allowing very limited diagnosis, to intensively instrumented experiments that provide detail but are expensive and highly disruptive, typically requiring the installation of specialist monitoring equipment and often vacating the house for several days. A key challenge in reducing the cost and difficulty of complex methods in occupied houses is to disaggregate space heating energy from that used for other purposes without installing specialist monitoring equipment. This paper presents a low-cost, non-invasive approach for doing so in a typical occupied UK home where space heating, hot water and cooking are provided by gas. The method, using dynamic pattern matching of total gas consumption measurements typical of those provided by a smart meter, was tested by applying it to two occupied houses in the UK. The findings revealed that the method was successful in detecting heating patterns in the data and filtering out coinciding uses.
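
    The paper's dynamic pattern-matching procedure is not reproduced here, but the idea of sliding a heating-cycle template over the meter series can be sketched with plain normalized cross-correlation; the file name, window length, and threshold below are all hypothetical.

        import numpy as np

        def match_template(gas: np.ndarray, template: np.ndarray,
                           threshold: float = 0.8) -> list:
            """Flag start indices where the gas series correlates strongly
            with a known heating-cycle template."""
            n = len(template)
            t = (template - template.mean()) / template.std()
            hits = []
            for i in range(len(gas) - n + 1):
                w = gas[i:i + n]
                if w.std() == 0:               # skip flat windows (no gas use)
                    continue
                corr = float(np.dot((w - w.mean()) / w.std(), t)) / n
                if corr > threshold:
                    hits.append(i)
            return hits

        gas = np.loadtxt("gas_halfhourly.csv")   # hypothetical half-hourly readings
        template = gas[100:148]                  # a hand-picked boiler firing episode
        heating_starts = match_template(gas, template)

    Matched windows would be attributed to space heating; the remaining consumption would cover hot water and cooking.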

    An Assessment to Benchmark the Seismic Performance of a Code-Conforming Reinforced-Concrete Moment-Frame Building

    This report describes a state-of-the-art performance-based earthquake engineering methodology that is used to assess the seismic performance of a four-story reinforced concrete (RC) office building that is generally representative of low-rise office buildings constructed in highly seismic regions of California. This “benchmark” building is considered to be located at a site in the Los Angeles basin, with a ductile RC special moment-resisting frame, designed according to modern building codes and standards, as its seismic lateral system. The building’s performance is quantified in terms of structural behavior up to collapse, structural and nonstructural damage and associated repair costs, and the risk of fatalities and their associated economic costs. To account for different building configurations that may be designed in practice to meet requirements of building size and use, eight structural design alternatives are used in the performance assessments. Our performance assessments account for important sources of uncertainty in the ground motion hazard, the structural response, structural and nonstructural damage, repair costs, and life-safety risk.

    The ground motion hazard characterization employs a site-specific probabilistic seismic hazard analysis and the evaluation of controlling seismic sources (through disaggregation) at seven ground motion levels (encompassing return periods ranging from 7 to 2475 years). Innovative procedures for ground motion selection and scaling are used to develop acceleration time history suites corresponding to each of the seven ground motion levels. Structural modeling utilizes both “fiber” models and “plastic hinge” models. Structural modeling uncertainties are investigated through comparison of these two modeling approaches, and through variations in structural component modeling parameters (stiffness, deformation capacity, degradation, etc.). Structural and nonstructural damage (fragility) models are based on a combination of test data, observations from post-earthquake reconnaissance, and expert opinion. Structural damage and repair costs are modeled for the RC beams, columns, and slab-column connections. Damage and associated repair costs are considered for some nonstructural building components, including wallboard partitions, interior paint, exterior glazing, ceilings, sprinkler systems, and elevators. The risk of casualties and the associated economic costs are evaluated based on the risk of structural collapse, combined with recent models on earthquake fatalities in collapsed buildings and accepted economic modeling guidelines for the value of human life in loss and cost-benefit studies.

    The principal results of this work pertain to the building collapse risk, damage and repair cost, and life-safety risk. These are discussed in turn below. When accounting for uncertainties in structural modeling and record-to-record variability (i.e., conditional on a specified ground shaking intensity), the structural collapse probabilities of the various designs range from 2% to 7% for earthquake ground motions that have a 2% probability of exceedance in 50 years (a 2475-year return period). When integrated with the ground motion hazard for the southern California site, the collapse probabilities result in mean annual frequencies of collapse in the range of [0.4 to 1.4]×10⁻⁴ for the various benchmark building designs.
In the development of these results, we made the following observations that are expected to be broadly applicable: (1) The ground motions selected for performance simulations must consider spectral shape (e.g., through use of the epsilon parameter) and should appropriately account for correlations between motions in both horizontal directions; (2) Lower-bound component models, which are commonly used in performance-based assessment procedures such as FEMA 356, can significantly bias collapse analysis results; it is more appropriate to use median component behavior, including all aspects of the component model (strength, stiffness, deformation capacity, cyclic deterioration, etc.); (3) Structural modeling uncertainties related to component deformation capacity and post-peak degrading stiffness can impact the variability of calculated collapse probabilities and mean annual rates to a similar degree as record-to-record variability of ground motions. Therefore, including the effects of such structural modeling uncertainties significantly increases the mean annual collapse rates. We found this increase to be roughly four to eight times relative to rates evaluated for the median structural model; (4) Nonlinear response analyses revealed at least six distinct collapse mechanisms, the most common of which was a story mechanism in the third story (differing from the multi-story mechanism predicted by nonlinear static pushover analysis); (5) Soil-foundation-structure interaction effects did not significantly affect the structural response, which was expected given the relatively flexible superstructure and stiff soils.

The potential for financial loss is considerable. Overall, the calculated expected annual losses (EAL) are in the range of $52,000 to $97,000 for the various code-conforming benchmark building designs, or roughly 1% of the replacement cost of the building ($8.8M).
These losses are dominated by the expected repair costs of the wallboard partitions (including interior paint) and by the structural members. Loss estimates are sensitive to details of the structural models, especially the initial stiffness of the structural elements. Losses are also found to be sensitive to structural modeling choices, such as ignoring the tensile strength of the concrete (40% change in EAL) or the contribution of the gravity frames to overall building stiffness and strength (15% change in EAL). Although there are a number of factors identified in the literature as likely to affect the risk of human injury during seismic events, the casualty modeling in this study focuses on those factors (building collapse, building occupancy, and spatial location of building occupants) that directly inform the building design process. The expected annual number of fatalities is calculated for the benchmark building, assuming that an earthquake can occur at any time of any day with equal probability and using fatality probabilities conditioned on structural collapse and based on empirical data. The expected annual number of fatalities for the code-conforming buildings ranges between 0.05×10⁻² and 0.21×10⁻², and is equal to 2.30×10⁻² for a non-code-conforming design. The expected loss of life during a seismic event is perhaps the decision variable that owners and policy makers will be most interested in mitigating. The fatality estimation carried out for the benchmark building provides a methodology for comparing this important value for various building designs, and enables informed decision making during the design process. The expected annual loss associated with fatalities caused by building earthquake damage is estimated by converting the expected annual number of fatalities into economic terms. Assuming the value of a human life is $3.5M, the fatality rate translates to an EAL due to fatalities of $3,500 to $5,600 for the code-conforming designs, and $79,800 for the non-code-conforming design. Compared to the EAL due to repair costs of the code-conforming designs, which are on the order of $66,000, the monetary value associated with life loss is small, suggesting that the governing factor in this respect will be the maximum permissible life-safety risk deemed by the public (or its representative government) to be appropriate for buildings.

Although the focus of this report is on one specific building, it can be used as a reference for other types of structures. This report is organized in such a way that the individual core chapters (4, 5, and 6) can be read independently. Chapter 1 provides background on the performance-based earthquake engineering (PBEE) approach. Chapter 2 presents the implementation of the PBEE methodology of the PEER framework, as applied to the benchmark building. Chapter 3 sets the stage for the choices of location and basic structural design. The subsequent core chapters focus on the hazard analysis (Chapter 4), the structural analysis (Chapter 5), and the damage and loss analyses (Chapter 6). Although the report is self-contained, readers interested in additional details can find them in the appendices.
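
    The central calculation behind the collapse-risk numbers, integrating a collapse fragility against the site hazard curve, can be sketched as follows; the fragility and hazard parameters here are invented for illustration and are not the report's values.

        import numpy as np
        from scipy.stats import lognorm

        # Collapse fragility P(collapse | Sa): lognormal with assumed parameters
        median_sa, beta = 2.0, 0.5               # illustrative, not the report's
        sa = np.linspace(0.01, 5.0, 500)         # spectral acceleration grid (g)
        p_collapse = lognorm.cdf(sa, s=beta, scale=median_sa)

        # Hazard curve: annual frequency of exceeding each Sa (power-law fit)
        k0, k = 1e-4, 3.0                        # illustrative fit coefficients
        lam = k0 * sa ** (-k)

        # Mean annual frequency of collapse = integral of P(C|Sa) |dlam/dSa| dSa
        mafc = np.trapz(p_collapse * -np.gradient(lam, sa), sa)
        print(f"mean annual collapse frequency ≈ {mafc:.2e}")

    The report performs this integration with a site-specific hazard analysis and fragilities derived from nonlinear response simulations, arriving at the [0.4 to 1.4]×10⁻⁴ range quoted above.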

    The Development of a Common Investment Appraisal for Urban Transport Projects.

    In December 1990 we were invited by Birmingham City Council and Centro to submit a proposal for an introductory study of the development of a common investment appraisal for urban transport projects. Many of the issues had arisen during the Birmingham Integrated Transport Study (BITS), in which we were involved, and in the subsequent assessment of light rail schemes, of which we have considerable experience. In subsequent discussion, the objectives were identified as being:- (i) to identify, briefly, the weaknesses with existing appraisal techniques; (ii) to develop proposals for common methods for the social cost-benefit appraisal of both urban road and rail schemes which overcome these weaknesses; (iii) to develop complementary and consistent proposals for common methods of financial appraisal of such projects; (iv) to develop proposals for variants of the methods in (ii) and (iii) which are appropriate to schemes of differing complexity and cost; (v) to consider briefly methods of treating externalities, and performance against other public sector goals, which are consistent with those developed under (ii) to (iv) above; (vi) to recommend work to be done in the second phase of the study (beyond March 1991) on the provision of input to such evaluation methods from strategic and mode-specific models, and on the testing of the proposed evaluation methods. Such issues are particularly topical at present, and we have been able to draw, in our study, on experience of:- (i) evaluation methods developed for BITS and subsequent integrated transport studies (MVA); (ii) evaluation of individual light rail and heavy rail investment projects (ITS, MVA); (iii) the recommendations of the AMA in "Changing Gear"; (iv) advice to the IPPR on appraisal methodology (ITS); (v) submissions to the House of Commons enquiry into "Roads for the Future" (ITS); (vi) advice to the National Audit Office (ITS); (vii) involvement in the SACTRA study of urban road appraisal (MVA, ITS).

    Taxonomy, Semantic Data Schema, and Schema Alignment for Open Data in Urban Building Energy Modeling

    Urban Building Energy Modeling (UBEM) is a critical tool for quantitative analysis of building decarbonization, sustainability, building-to-grid integration, and renewable energy applications at city, regional, and national scales. Researchers usually use open data as inputs to build and calibrate UBEM. However, open data come from thousands of sources covering weather, building characteristics, and other perspectives. Moreover, the lack of semantic features in open data further increases the engineering effort required to process the information into direct UBEM inputs. In this paper, we first review open data types used for UBEM and develop a taxonomy to categorize open data. Based on that, we further develop a semantic data schema for each open data category to maintain data consistency and improve model automation for UBEM. In a case study, we use three popular open datasets to show how they can be automatically processed according to the proposed semantic data schema using large language models. The accurate results generated by the large language models indicate the machine-readability and human-interpretability of the developed semantic data schema.
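
    A minimal sketch of what such schema alignment could look like in code; the field names, taxonomy labels, and source columns below are hypothetical and are not the schema proposed in the paper.

        from dataclasses import dataclass

        @dataclass
        class BuildingRecord:
            """One building, normalized to a shared semantic schema."""
            building_id: str
            use_type: str                 # taxonomy category, e.g. "office"
            floor_area_m2: float

        def align(raw: dict, mapping: dict) -> BuildingRecord:
            """Schema alignment: rename source-specific fields to shared ones."""
            return BuildingRecord(**{ours: raw[theirs]
                                     for ours, theirs in mapping.items()})

        # e.g. a county assessor export with its own column names (hypothetical)
        row = {"id": "B-001", "usage": "office", "gfa": 1250.0}
        record = align(row, {"building_id": "id", "use_type": "usage",
                             "floor_area_m2": "gfa"})

    In the paper's case study, a large language model performs this mapping step automatically; the deterministic version above only shows the target that such a mapping produces.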