
    Exploratory analysis of high-resolution power interruption data reveals spatial and temporal heterogeneity in electric grid reliability

    Modern grid monitoring equipment enables utilities to collect detailed records of power interruptions. These data are aggregated to compute publicly reported metrics describing high-level characteristics of grid performance. The current work explores the depth of insight that can be gained from public data, and the implications of losing visibility into heterogeneity in grid performance through aggregation. We present an exploratory analysis examining three years of high-resolution power interruption data collected by archiving information posted in real time on the public-facing website of a utility in the Western United States. We report on the size, frequency and duration of individual power interruptions, and on spatio-temporal variability in aggregate reliability metrics. Our results show that metrics of grid performance can vary spatially and temporally by orders of magnitude, revealing heterogeneity that is not evident in publicly reported metrics. We show that limited access to granular information presents a substantive barrier to conducting detailed policy analysis, and discuss how more widespread data access could help to answer questions that remain unanswered in the literature to date. Given open questions about whether grid performance is adequate to support societal needs, we recommend establishing pathways to make high-resolution power interruption data available to support policy research.
    Comment: Journal submission (in review), 22 pages, 8 figures, 1 table
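    As a rough illustration of how aggregate reliability indices of this kind can be computed from per-interruption records, the sketch below groups hypothetical outage events by region and computes SAIDI- and SAIFI-style indices. The column names, customer counts and metric definitions are illustrative assumptions, not the paper's data or schema.

        import pandas as pd

        # Hypothetical per-interruption records: one row per outage event.
        # Column names and values are assumptions for illustration only.
        events = pd.DataFrame({
            "region": ["north", "north", "south", "south"],
            "start": pd.to_datetime(["2019-01-03 08:00", "2019-06-10 14:30",
                                     "2019-01-03 09:15", "2019-11-22 02:00"]),
            "duration_minutes": [45.0, 180.0, 10.0, 600.0],
            "customers_affected": [1200, 300, 50, 4000],
        })
        customers_served = {"north": 50_000, "south": 80_000}  # assumed totals per region

        # SAIDI-style index: customer-minutes of interruption per customer served.
        # SAIFI-style index: customer interruptions per customer served.
        for region, g in events.groupby("region"):
            served = customers_served[region]
            saidi = (g["duration_minutes"] * g["customers_affected"]).sum() / served
            saifi = g["customers_affected"].sum() / served
            print(f"{region}: SAIDI-like = {saidi:.2f} min, SAIFI-like = {saifi:.3f}")

    Disaggregating further, for example by month, feeder or census tract, is what exposes the spatial and temporal heterogeneity that the aggregated public metrics hide.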

    Method of Monte Carlo grid for data analysis

    This paper presents an analysis procedure for experimental data using theoretical functions generated by Monte Carlo simulation. Applying the classical chi-square fitting procedure to some multiparameter systems is extremely difficult because no analytical expression exists for the theoretical functions describing the system. The proposed algorithm is based on the least-squares method and uses a grid of Monte Carlo-generated functions, each corresponding to definite values of the minimization parameters. It is applied to the data analysis of the E742 experiment (TRIUMF, Vancouver, Canada), with the aim of extracting muonic atom scattering parameters on solid hydrogen.
    Comment: 16 pages, 10 figures, submitted to NI
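    A minimal sketch of this grid-of-templates idea: a Monte Carlo-generated template histogram is produced for each point on a grid of candidate parameter values, compared to the data with a chi-square (least-squares) statistic, and the grid point that minimizes it is taken as the estimate. The toy exponential model and function names below are illustrative assumptions, not the E742 analysis code.

        import numpy as np

        rng = np.random.default_rng(0)
        bins = np.linspace(0.0, 10.0, 21)

        def mc_template(scale, n_events=200_000):
            """Monte Carlo-generated 'theoretical' histogram for one grid point.
            A toy exponential model stands in for the real simulation."""
            sample = rng.exponential(scale=scale, size=n_events)
            hist, _ = np.histogram(sample, bins=bins)
            return hist / hist.sum()  # normalise to unit area

        # Pretend "data": generated at an unknown true value, to be recovered by the fit.
        data, _ = np.histogram(rng.exponential(scale=2.5, size=5_000), bins=bins)

        # Grid of candidate parameter values, one Monte Carlo template per grid point.
        grid = np.linspace(1.0, 4.0, 31)
        chi2 = []
        for scale in grid:
            expected = mc_template(scale) * data.sum()
            mask = expected > 0
            chi2.append(np.sum((data[mask] - expected[mask]) ** 2 / expected[mask]))

        best = grid[int(np.argmin(chi2))]
        print(f"best-fit parameter on the grid: {best:.2f}")

    In a multiparameter setting the grid becomes multidimensional, and the minimum can be refined by interpolating the chi-square surface between grid points.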

    Mass production of event simulations for the BaBar experiment using the Grid

    The BaBar experiment has been taking data since 1999, investigating the violation of charge-parity (CP) symmetry in the field of high-energy physics. Event simulation is a computationally intensive task, owing to the complexity of the Monte Carlo algorithm implemented with the GEANT engine. The simulation input data are stored in ROOT format and fall into two categories: conditions data, which describe the detector status when data are recorded, and background trigger data, which supply the noise signal necessary to obtain a realistic simulation. To satisfy these requirements, the traditional BaBar computing model distributes events over several sites involved in the collaboration, where each site manager centrally administers a private farm dedicated to simulation production. The new grid approach applied to the BaBar production framework is discussed, along with the schema adopted for data deployment via Xrootd/Scalla servers, including data management using grid middleware on distributed storage facilities spread over the INFN-GRID network. A comparison between the two models is provided, also describing the custom applications developed to perform the whole production task on the grid and presenting the results achieved.

    Gaze Embeddings for Zero-Shot Image Classification

    Zero-shot image classification using auxiliary information, such as attributes describing discriminative object properties, requires time-consuming annotation by domain experts. We instead propose a method that relies on human gaze as auxiliary information, exploiting the fact that even non-expert users have a natural ability to judge class membership. We present a data collection paradigm that involves a discrimination task to increase the information content obtained from gaze data. Our method extracts discriminative descriptors from the data and learns a compatibility function between image and gaze using three novel gaze embeddings: Gaze Histograms (GH), Gaze Features with Grid (GFG) and Gaze Features with Sequence (GFS). We introduce two new gaze-annotated datasets for fine-grained image classification and show that human gaze data is indeed class discriminative, provides a competitive alternative to expert-annotated attributes, and outperforms other baselines for zero-shot image classification.
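    The core ingredient is a compatibility function scored between an image representation and a class-level gaze embedding; classes of test images are predicted by maximizing that score over unseen-class embeddings. The bilinear form, hinge-style ranking loss, dimensions and random stand-in data below are illustrative assumptions about how such a function can be trained, not the paper's exact formulation.

        import numpy as np

        rng = np.random.default_rng(1)
        d_img, d_gaze, n_train_classes = 64, 32, 5

        # Illustrative stand-ins: image features x and per-class gaze embeddings phi(y).
        X = rng.normal(size=(200, d_img))                       # training images
        y = rng.integers(0, n_train_classes, size=200)          # their class labels
        gaze_emb = rng.normal(size=(n_train_classes, d_gaze))   # e.g. gaze histograms per class

        W = np.zeros((d_img, d_gaze))   # bilinear compatibility F(x, y) = x^T W phi(y)
        lr = 0.01

        # SGD on a simple multiclass ranking (hinge) loss over the seen classes.
        for epoch in range(20):
            for xi, yi in zip(X, y):
                scores = xi @ W @ gaze_emb.T          # compatibility with every seen class
                margins = 1.0 + scores - scores[yi]
                margins[yi] = 0.0
                j = int(np.argmax(margins))
                if margins[j] > 0:                    # push the true class above the violator
                    W += lr * np.outer(xi, gaze_emb[yi] - gaze_emb[j])

        # Zero-shot prediction: argmax compatibility over *unseen* class gaze embeddings.
        unseen_gaze = rng.normal(size=(3, d_gaze))
        x_test = rng.normal(size=d_img)
        pred = int(np.argmax(x_test @ W @ unseen_gaze.T))
        print("predicted unseen class index:", pred)

    At test time only the gaze embeddings of the unseen classes are needed, which is what makes the approach zero-shot.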

    Stochastic finite differences and multilevel Monte Carlo for a class of SPDEs in finance

    In this article, we propose a Milstein finite difference scheme for a stochastic partial differential equation (SPDE) describing a large particle system. We show, by means of Fourier analysis, that the discretisation on an unbounded domain is convergent of first order in the timestep and second order in the spatial grid size, and that the discretisation is stable with respect to boundary data. Numerical experiments clearly indicate that the same convergence order also holds for boundary-value problems. Multilevel path simulation, previously used for SDEs, is shown to give substantial complexity gains compared to a standard discretisation of the SPDE or direct simulation of the particle system. We derive complexity bounds and illustrate the results by an application to basket credit derivatives.
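    The multilevel idea can be illustrated independently of the specific SPDE: the quantity of interest is written as a telescoping sum of corrections between successively finer discretisations, and each correction is estimated with coupled coarse/fine paths so that few samples are needed on the expensive fine levels. The sketch below applies this pattern to a toy scalar SDE with a Milstein step; the model, payoff and sample counts are illustrative assumptions, not the scheme analysed in the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        def level_estimator(level, n_samples, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
            """Mean of P_l - P_{l-1} for a toy scalar SDE (geometric Brownian motion),
            discretised with the Milstein scheme; coarse and fine paths share the same
            Brownian increments so the correction has small variance."""
            n_fine = 2 ** level
            dt_f = T / n_fine
            x_f = np.full(n_samples, x0)
            x_c = np.full(n_samples, x0)
            dw_pair = np.zeros(n_samples)
            for step in range(n_fine):
                dw = rng.normal(0.0, np.sqrt(dt_f), size=n_samples)
                # Milstein step on the fine grid.
                x_f += mu * x_f * dt_f + sigma * x_f * dw + 0.5 * sigma**2 * x_f * (dw**2 - dt_f)
                dw_pair += dw
                if level > 0 and step % 2 == 1:      # coarse step uses two fine increments
                    dt_c = 2 * dt_f
                    x_c += (mu * x_c * dt_c + sigma * x_c * dw_pair
                            + 0.5 * sigma**2 * x_c * (dw_pair**2 - dt_c))
                    dw_pair = np.zeros(n_samples)
            payoff_f = np.maximum(x_f - 1.0, 0.0)    # toy call-style payoff
            payoff_c = np.maximum(x_c - 1.0, 0.0) if level > 0 else 0.0
            return np.mean(payoff_f - payoff_c)

        # MLMC estimate: telescoping sum of level corrections, fewer samples on fine levels.
        levels, base_samples = 5, 40_000
        estimate = sum(level_estimator(l, max(base_samples >> l, 500)) for l in range(levels + 1))
        print(f"MLMC estimate of E[payoff]: {estimate:.4f}")

    Because the variance of each correction shrinks as the grids are refined, most samples can be taken on the cheap coarse levels, which is the source of the complexity gains the abstract refers to.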

    Peer-to-Peer Metadata Management for Knowledge Discovery Applications in Grids

    Computational Grids are powerful platforms gathering computational power and storage space from thousands of geographically distributed resources. The applications running on such platforms need to access the various heterogeneous distributed resources they offer efficiently and reliably. This can be achieved by using metadata describing all available resources. It is therefore crucial to provide efficient metadata management architectures and frameworks. In this paper we describe the design of a Grid metadata management service. We focus on a particular use case: the Knowledge Grid architecture, which provides high-level Grid services for distributed knowledge discovery applications. Taking advantage of an existing Grid data-sharing service, namely JuxMem, the proposed solution lies at the border between peer-to-peer systems and Web services.

    Atmospheric model development in support of SEASAT. Volume 1: Summary of findings

    Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data to assess the impact of grid resolution on short-range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data could be enhanced during future sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on the analysis; and (5) devising and implementing numerous practical solutions to general analysis problems.
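    Items (1) and (2) concern how observations are weighted into the gridded analysis. A minimal sketch of a distance-weighted (Cressman-style successive-correction) objective analysis is shown below; the one-dimensional grid, background field, observations and radius of influence are illustrative assumptions, not the SEASAT analysis scheme itself.

        import numpy as np

        # Minimal distance-weighted (Cressman-style) objective analysis on a 1-D grid.
        grid_x = np.linspace(0.0, 100.0, 51)      # analysis grid (coarsen or refine to test resolution)
        background = np.zeros_like(grid_x)        # first-guess field

        obs_x = np.array([12.0, 40.0, 77.0])      # observation locations
        obs_val = np.array([1.5, -0.8, 2.1])      # observed values
        radius = 15.0                             # radius of influence, one of the analysis "weights"

        def cressman_weights(dist, R):
            """Weights fall from 1 at the observation to 0 at the radius of influence."""
            w = (R**2 - dist**2) / (R**2 + dist**2)
            return np.where(dist < R, np.maximum(w, 0.0), 0.0)

        # Observation increments: observed value minus background interpolated to the obs location.
        increments = obs_val - np.interp(obs_x, grid_x, background)

        analysis = background.copy()
        for i, x in enumerate(grid_x):
            w = cressman_weights(np.abs(obs_x - x), radius)
            if w.sum() > 0:
                analysis[i] += np.sum(w * increments) / w.sum()

        print(analysis.round(2))

    Varying the radius of influence or the weight function, and repeating the analysis on coarser or finer grids, corresponds to the kinds of sensitivity tests described above.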