Exploratory analysis of high-resolution power interruption data reveals spatial and temporal heterogeneity in electric grid reliability
Modern grid monitoring equipment enables utilities to collect detailed records of power interruptions. These data are aggregated to compute publicly reported metrics describing high-level characteristics of grid performance. The current work explores the depth of insights that can be gained from public data, and the implications of losing visibility into heterogeneity in grid performance through aggregation. We present an exploratory analysis examining three years of high-resolution power interruption data collected by archiving information posted in real time on the public-facing website of a utility in the Western United States. We report on the size, frequency, and duration of individual power interruptions, and on spatio-temporal variability in aggregate reliability metrics. Our results show that metrics of grid performance can vary spatially and temporally by orders of magnitude, revealing heterogeneity that is not evidenced in publicly reported metrics. We show that limited access to granular information presents a substantive barrier to conducting detailed policy analysis, and discuss how more widespread data access could help to answer questions that remain unanswered in the literature to date. Given open questions about whether grid performance is adequate to support societal needs, we recommend establishing pathways to make high-resolution power interruption data available to support policy research.
Comment: Journal submission (in review), 22 pages, 8 figures, 1 table
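To make the aggregation step concrete, the sketch below computes the two most commonly reported reliability metrics, SAIFI and SAIDI, from event-level interruption records. The record layout (customers_affected, duration_min) and the system size are illustrative assumptions; the metric definitions follow IEEE Std 1366, not any particular utility's implementation.

```python
# Minimal sketch of the aggregation behind publicly reported reliability
# metrics, under an assumed event-record layout.
from dataclasses import dataclass

@dataclass
class Interruption:
    customers_affected: int   # customers interrupted by this event
    duration_min: float       # time to restoration, in minutes

def saifi(events, customers_served):
    """Average number of sustained interruptions per customer served."""
    return sum(e.customers_affected for e in events) / customers_served

def saidi(events, customers_served):
    """Average interruption duration (minutes) per customer served."""
    return sum(e.customers_affected * e.duration_min
               for e in events) / customers_served

# Illustrative numbers only.
events = [Interruption(1200, 45.0), Interruption(300, 180.0)]
print(saifi(events, 50_000), saidi(events, 50_000))
```

Aggregating this way over an entire service territory is precisely what hides the heterogeneity the abstract describes: two areas with very different interruption histories can yield the same system-wide SAIFI and SAIDI.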
Tracking the Reliability of the U.S. Electric Power System: An Assessment of Publicly Available Information Reported to State Public Utility Commissions
Large blackouts, such as the August 14-15, 2003 blackout in the northeastern United States and Canada, focus attention on the importance of reliable electric service. As public and private efforts are undertaken to improve reliability and prevent power interruptions, it is appropriate to assess their effectiveness. Measures of reliability, such as the frequency and duration of power interruptions, have been reported by electric utilities to state public utility commissions for many years. This study examines current state and utility practices for collecting and reporting electricity reliability information and discusses challenges that arise in assessing reliability because of differences among these practices. The study is based primarily on reliability information for 2006 reported by 123 utilities to 37 state public utility commissions.
Customer Impact Evaluation for the 2009 Southern California Edison Participating Load Pilot
The 2009 Participating Load Pilot Customer Impact Evaluation provides evidence that short-duration demand response events which cycle off air conditioners for less than thirty minutes in a hot, dry environment do not lead to a significant degradation in the comfort of residents participating in the program. This was investigated using: (1) analysis of interval temperature data collected from inside residences of select program participants; and (2) direct and indirect customer feedback from surveys designed and implemented by Southern California Edison at the conclusion of the program season. LBNL acquired 100 indoor temperature monitors for this study, which transmitted temperature readings with corresponding timestamps at least once per hour during the program season, June-October 2009. Recorded temperatures were transferred from the onsite telemetry devices to a mesh network, stored, and then delivered to KEMA for analysis. Following an extensive data quality review, temperature increases during each of the thirty demand response test events were calculated for each device. The results are as follows: (1) even for tests taking place during outside temperatures in excess of 100 degrees Fahrenheit, over 85 percent of the devices measured less than a 0.5 degree Fahrenheit indoor temperature increase during the event; (2) of the increases that were observed, none was more than 5 degrees, and increases of more than 2 degrees were extremely rare. At the end of the testing season, SCE and KEMA designed and conducted feedback surveys of facilities and public works managers and of approximately 100 customers to assess the extent to which the PLP events were noticed or disrupted the comfort of participants. While only a small sample of 3 manager and 16 customer surveys was completed, the responses indicate: (1) no customer reported even a moderate level of discomfort from the cycling-off of their air conditioners during test events; and (2) very few customers noticed any of the thirty events at all. The results of this study suggest that the impacts on comfort from short-duration interruptions of air conditioners, even in very hot climates, are for the most part very modest, if they are noticed at all. Still, we should expect these impacts to increase with longer interruptions of air conditioning; by the same token, we should expect them to be less significant in cooler climates.
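As a rough illustration of the per-device calculation described above, the sketch below computes the indoor temperature rise over one event window as the difference between the first and last readings inside the window. The pandas layout and the column names (device_id, timestamp, temp_f) are assumptions, not the KEMA analysis itself.

```python
import pandas as pd

def event_temp_rise(readings: pd.DataFrame,
                    start: pd.Timestamp,
                    end: pd.Timestamp) -> pd.Series:
    """Indoor temperature increase per device over one DR event (deg F)."""
    # Keep only readings inside the event window.
    window = readings[readings["timestamp"].between(start, end)]
    # For each device, compare the earliest and latest readings in the window.
    by_device = window.sort_values("timestamp").groupby("device_id")["temp_f"]
    return by_device.last() - by_device.first()
```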
An Examination of Temporal Trends in Electricity Reliability Based on Reports from U.S. Electric Utilities
Since the 1960s, the U.S. electric power system has experienced a major blackout about once every 10 years. Each has been a vivid reminder of the importance society places on the continuous availability of electricity and has led to calls for changes to enhance reliability. At the root of these calls are judgments about what reliability is worth and how much should be paid to ensure it. In principle, comprehensive information on the actual reliability of the electric power system and on how proposed changes would affect reliability ought to help inform these judgments. Yet comprehensive, national-scale information on the reliability of the U.S. electric power system is lacking. This report helps to address this information gap by assessing trends in U.S. electricity reliability based on information reported by electric utilities on power interruptions experienced by their customers. Our research augments prior investigations, which focused only on power interruptions originating in the bulk power system, by considering interruptions originating both from the bulk power system and from within local distribution systems. Our research also accounts for differences among utility reliability reporting practices by employing statistical techniques that remove the influence of these differences on the trends that we identify. The research analyzes up to 10 years of electricity reliability information collected from 155 U.S. electric utilities, which together account for roughly 50% of total U.S. electricity sales. The questions analyzed include: 1. Are there trends in reported electricity reliability over time? 2. How are trends in reported electricity reliability affected by the installation or upgrade of an automated outage management system? 3. How are trends in reported electricity reliability affected by the use of IEEE Standard 1366-2003?
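The approach of statistically removing reporting-practice differences suggests a panel regression along the following lines: a log reliability metric regressed on a time trend plus indicators for outage management system (OMS) installation and IEEE 1366 adoption, with utility fixed effects. This is a minimal sketch under assumed column names, not the report's actual specification.

```python
import numpy as np
import statsmodels.formula.api as smf

def fit_trend_model(panel):
    """panel: one row per utility-year with columns
    utility, year, saidi, oms (0/1), ieee1366 (0/1)."""
    panel = panel.assign(log_saidi=np.log(panel["saidi"]))
    # Utility fixed effects absorb level differences in reporting practices;
    # the oms and ieee1366 indicators capture shifts when practices change,
    # leaving the year coefficient as the reliability trend of interest.
    model = smf.ols("log_saidi ~ year + oms + ieee1366 + C(utility)",
                    data=panel)
    return model.fit(cov_type="cluster",
                     cov_kwds={"groups": panel["utility"]})
```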
THE DOE-2 BUILDING ENERGY ANALYSIS PROGRAM
The DOE-2 Building Energy Analysis Program was designed to allow engineers and architects to perform design studies of whole-building energy use under actual weather conditions. Its development was guided by several objectives: 1) that the description of the building entered by the user be readily understood by non-computer scientists, 2) that, when available, the calculations be based upon well-established algorithms, 3) that it permit the simulation of commonly available heating, ventilating, and air-conditioning (HVAC) equipment, 4) that the computer costs of the program be minimal, and 5) that the predicted energy use of a building be acceptably close to measured values. These objectives have been met. This paper is intended to give an overview of the program upon completion of the DOE-2.1C edition.
Use of Frequency Response Metrics to Assess the Planning and Operating Requirements for Reliable Integration of Variable Renewable Generation
An interconnected electric power system is a complex system that must be operated within a safe frequency range in order to reliably maintain the instantaneous balance between generation and load. This is accomplished by ensuring that adequate resources are available to respond to expected and unexpected imbalances and by restoring frequency to its scheduled value in order to ensure uninterrupted electric service to customers. Electrical systems must be flexible enough to reliably operate under a variety of "change" scenarios. System planners and operators must understand how other parts of the system change in response to the initial change, and need tools to manage such changes to ensure reliable operation within the scheduled frequency range. This report presents a systematic approach to identifying metrics that are useful for operating and planning a reliable system with increased amounts of variable renewable generation, building on existing industry practices for frequency control after the unexpected loss of a large amount of generation. The report introduces a set of metrics, or tools, for measuring the adequacy of frequency response within an interconnection. Based on the concept of the frequency nadir, these metrics take advantage of new information gathering and processing capabilities that system operators are developing for wide-area situational awareness. Primary frequency response is the leading metric used in this report to assess the adequacy of the primary frequency control reserves necessary to ensure reliable operation. It measures what is needed to arrest frequency decline (i.e., to establish the frequency nadir) at a frequency higher than the highest set point for under-frequency load shedding within an interconnection. These metrics can be used to guide the reliable operation of an interconnection under changing circumstances.
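The nadir-based adequacy check reduces to simple arithmetic on a measured frequency trace, as in the sketch below. The 59.5 Hz under-frequency load shedding (UFLS) set point is an illustrative assumption; actual set points vary by interconnection.

```python
def nadir_margin(freq_trace_hz, ufls_setpoint_hz=59.5):
    """Return (nadir, margin): the lowest frequency reached after a
    generation loss and its margin above the highest UFLS set point.
    A margin <= 0 indicates inadequate primary frequency response."""
    nadir = min(freq_trace_hz)
    return nadir, nadir - ufls_setpoint_hz

# Illustrative post-event trace sampled at fixed intervals (Hz).
trace = [60.00, 59.92, 59.78, 59.71, 59.74, 59.83, 59.90]
print(nadir_margin(trace))  # (59.71, ~0.21): decline arrested above UFLS
```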
Why Are Outcomes Different for Registry Patients Enrolled Prospectively and Retrospectively? Insights from the Global Anticoagulant Registry in the FIELD-Atrial Fibrillation (GARFIELD-AF).
Background: Retrospective and prospective observational studies are designed to reflect real-world evidence on clinical practice, but can yield conflicting results. The GARFIELD-AF Registry includes both methods of enrolment and allows analysis of differences in patient characteristics and outcomes that may result. Methods and Results: Patients with atrial fibrillation (AF) and ≥1 risk factor for stroke at diagnosis of AF were recruited either retrospectively (n = 5069) or prospectively (n = 5501) from 19 countries and then followed prospectively. The retrospectively enrolled cohort comprised patients with established AF (for at least 6, and up to 24, months before enrolment), who were identified retrospectively (with baseline and partial follow-up data collected from the medical records) and then followed prospectively for 0-18 months (such that the total time of follow-up was 24 months; data collection between Dec-2009 and Oct-2010). In the prospectively enrolled cohort, patients with newly diagnosed AF (≤6 weeks after diagnosis) were recruited between Mar-2010 and Oct-2011 and were followed for 24 months after enrolment. Differences between the cohorts were observed in clinical characteristics, including type of AF, stroke prevention strategies, and event rates. More patients in the retrospectively identified cohort received vitamin K antagonists (62.1% vs. 53.2%) and fewer received non-vitamin K oral anticoagulants (1.8% vs. 4.2%). All-cause mortality rates per 100 person-years during the prospective follow-up (from the first study visit up to 1 year) were significantly lower in the retrospectively than in the prospectively identified cohort (3.04 [95% CI 2.51 to 3.67] vs. 4.05 [95% CI 3.53 to 4.63]; p = 0.016). Conclusions: Interpretations of data from registries that aim to evaluate the characteristics and outcomes of patients with AF must take account of differences in registry design and the impact of the recall bias and survivorship bias that are incurred with retrospective enrolment. Clinical Trial Registration: URL: http://www.clinicaltrials.gov. Unique identifier for GARFIELD-AF (NCT01090362).
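The mortality comparison rests on standard person-time arithmetic; the sketch below computes a rate per 100 person-years with the usual exact Poisson confidence interval. The inputs are illustrative numbers, not GARFIELD-AF data.

```python
from scipy.stats import chi2

def rate_per_100py(events, person_years, alpha=0.05):
    """Event rate per 100 person-years with an exact Poisson 95% CI."""
    # Exact (Garwood) interval from chi-squared quantiles.
    lo = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    hi = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    scale = 100.0 / person_years
    return events * scale, lo * scale, hi * scale

# Illustrative: 150 deaths over 4934 person-years -> about 3.04 per 100 PY.
print(rate_per_100py(events=150, person_years=4934))
```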
Improved risk stratification of patients with atrial fibrillation: an integrated GARFIELD-AF tool for the prediction of mortality, stroke and bleed in patients with and without anticoagulation.
OBJECTIVES: To provide an accurate, web-based tool for stratifying patients with atrial fibrillation to facilitate decisions on the potential benefits/risks of anticoagulation, based on mortality, stroke and bleeding risks. DESIGN: The new tool was developed, using stepwise regression, for all patients and then applied to lower-risk patients. C-statistics were compared with CHA2DS2-VASc using 30-fold cross-validation to control for overfitting. External validation was undertaken in an independent dataset, the Outcome Registry for Better Informed Treatment of Atrial Fibrillation (ORBIT-AF). PARTICIPANTS: Data from 39 898 patients enrolled in the prospective GARFIELD-AF registry provided the basis for deriving and validating an integrated risk tool to predict stroke risk, mortality and bleeding risk. RESULTS: The discriminatory value of the GARFIELD-AF risk model was superior to CHA2DS2-VASc for patients with or without anticoagulation. C-statistics (95% CI) for all-cause mortality, ischaemic stroke/systemic embolism and haemorrhagic stroke/major bleeding (treated patients) were 0.77 (0.76 to 0.78), 0.69 (0.67 to 0.71) and 0.66 (0.62 to 0.69), respectively, for the GARFIELD-AF risk models, and 0.66 (0.64 to 0.67), 0.64 (0.61 to 0.66) and 0.64 (0.61 to 0.68), respectively, for CHA2DS2-VASc (or HAS-BLED for bleeding). In very low to low risk patients (CHA2DS2-VASc 0 or 1 (men) and 1 or 2 (women)), the CHA2DS2-VASc and HAS-BLED (for bleeding) scores offered weak discriminatory value for mortality, stroke/systemic embolism and major bleeding. C-statistics for the GARFIELD-AF risk tool were 0.69 (0.64 to 0.75), 0.65 (0.56 to 0.73) and 0.60 (0.47 to 0.73) for each endpoint, respectively, versus 0.50 (0.45 to 0.55), 0.59 (0.50 to 0.67) and 0.55 (0.53 to 0.56) for CHA2DS2-VASc (or HAS-BLED for bleeding). Upon validation in the ORBIT-AF population, C-statistics showed that the GARFIELD-AF risk tool was effective for predicting 1-year all-cause mortality using the full and simplified models: C-statistics 0.75 (0.73 to 0.77) and 0.75 (0.73 to 0.77), respectively; for predicting any stroke or systemic embolism over 1 year, the C-statistic was 0.68 (0.62 to 0.74). CONCLUSIONS: Performance of the GARFIELD-AF risk tool was superior to CHA2DS2-VASc in predicting stroke and mortality, and superior to HAS-BLED for bleeding, overall and in lower-risk patients. The GARFIELD-AF tool has the potential for incorporation in routine electronic systems and, for the first time, permits simultaneous evaluation of ischaemic stroke, mortality and bleeding risks. CLINICAL TRIAL REGISTRATION: URL: http://www.clinicaltrials.gov. Unique identifiers: GARFIELD-AF (NCT01090362) and ORBIT-AF (NCT01165710).
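For reference, the C-statistic underpinning this comparison is the probability that a randomly chosen case receives a higher risk score than a randomly chosen control. A plain pairwise version for a binary outcome is sketched below; sklearn.metrics.roc_auc_score computes the same quantity more efficiently, and the registry analyses use time-to-event extensions of this idea that the sketch does not cover.

```python
def c_statistic(risk_scores, outcomes):
    """Concordance for a binary outcome: the fraction of case/control
    pairs in which the case's risk score is higher (ties count 0.5)."""
    cases = [s for s, y in zip(risk_scores, outcomes) if y == 1]
    controls = [s for s, y in zip(risk_scores, outcomes) if y == 0]
    wins = sum(1.0 if c > k else 0.5 if c == k else 0.0
               for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

print(c_statistic([0.9, 0.4, 0.7, 0.2], [1, 0, 1, 0]))  # 1.0: perfect ranking
```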